* [PATCH v13 0/5] Support add/remove memory region and get-max-slots
@ 2026-05-14 2:01 pravin.bathija
2026-05-14 2:01 ` [PATCH v13 1/5] vhost: add user to mailmap and define to vhost hdr pravin.bathija
` (4 more replies)
0 siblings, 5 replies; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
This is version v13 of the patch set; it incorporates the
recommendations made by Chengwen Feng.
Changes made to patches 3/5 and 4/5:
* Relocated function remove_guest_pages from patch 3/5 to 4/5.
* Renamed VhostUserSingleMemReg to VhostUserMemRegMsg and memory_single
to memreg.
This implementation has been extensively tested by doing read/write I/O
from multiple instances of fio + libblkio (front-end) talking to
spdk/dpdk-based (back-end) drives. It was also tested with a qemu
front-end talking to dpdk testpmd (back-end) while performing
add/removal of memory regions, and post-copy live migration was tested
after doing add_memory_region.
Version Log:
Version v13 (Current version): Incorporate code review suggestions from
Chengwen Feng as described above.
Version v12: Incorporate code review suggestions from Maxime Coquelin
and ai-code-review.
Changes made to patch 3/5
Refactored async_dma_map() to delegate to async_dma_map_region(),
eliminating code duplication between the two functions.
Restored original comments in async_dma_map_region() explaining why
ENODEV and EINVAL errors are ignored (these were stripped in v10).
Reverted unnecessary changes to vhost_user_postcopy_register() --
removed the host_user_addr == 0 checks and reg_msg_index indirection
that were added in v10, since this function is only called from
vhost_user_set_mem_table() where regions are always contiguous.
Version v11: Incorporate code review suggestions from Stephen Hemminger.
Change made to patch 4/5
Fix incomplete cleanup in vhost_user_add_mem_reg() when
vhost_user_mmap_region() fails after the mmap succeeds (e.g.
add_guest_pages() realloc failure). The error path now
calls remove_guest_pages() and free_mem_region() to undo the mapping
and stale guest-page entries, preventing a leaked mmap and slot reuse
corruption. The plain close(fd) path is kept for pre-mmap failures.
Version v10: Incorporate code review suggestions from Stephen Hemminger.
Change made to patch 4/5
Moved dev_invalidate_vrings after free_mem_region, array compaction, and
nregions decrement. This ensures translate_ring_addresses only sees
surviving memory regions, preventing vring pointers from resolving into
a region that is about to be unmapped.
Version v9: Incorporate code review suggestions from Stephen Hemminger.
Changes made to patch 3/5
Restored max_guest_pages initial value to hardcoded 8 instead of
VHOST_MEMORY_MAX_NREGIONS, matching upstream semantics.
Changes made to patch 4/5
Added close(reg->fd) and reg->fd = -1 before goto close_msg_fds in the
mmap failure path to fix fd leak after fd was moved from ctx->fds[0].
Converted dev_invalidate_vrings from a plain function to a macro +
implementation function pair, accepting message ID as a parameter so
the static_assert reports the correct handler at each call site.
Updated dev_invalidate_vrings call in add_mem_reg to pass
VHOST_USER_ADD_MEM_REG as message ID.
Updated dev_invalidate_vrings call in rem_mem_reg to pass
VHOST_USER_REM_MEM_REG as message ID.
Version v8: Incorporate code review suggestions from Stephen Hemminger.
Rewrote async_dma_map_region() to iterate guest pages by host-address
range matching.
Changed dev_invalidate_vrings() to accept a double pointer so that
pointer updates propagate to the caller.
Added new function remove_guest_pages().
Narrowed the add_mem_reg error path to clean up only the single failed
region instead of destroying all existing regions.
Version v7: Incorporate code review suggestions from Maxime Coquelin.
Add debug messages to vhost_user_postcopy_register().
Version v6: Added the enablement of this feature as a final patch in
this patch set and other code optimizations as suggested by Maxime
Coquelin.
Version v5: Removed the patch that increased the number of memory
regions from 8 to 128. This will be submitted as a separate feature at
a later point after incorporating additional optimizations. Also
includes code optimizations as suggested by Chengwen Feng.
Version v4: Code optimizations as suggested by Chengwen Feng.
Version v3: Code optimizations as suggested by Maxime Coquelin
and Thomas Monjalon.
Version v2: Code optimizations as suggested by Maxime Coquelin.
Version v1: Initial patch set.
Pravin M Bathija (5):
vhost: add user to mailmap and define to vhost hdr
vhost_user: header defines for add/rem mem region
vhost_user: support function defines for back-end
vhost_user: function defs for add/rem mem regions
vhost_user: enable configure memory slots
.mailmap | 1 +
lib/vhost/rte_vhost.h | 4 +
lib/vhost/vhost_user.c | 418 +++++++++++++++++++++++++++++++++++------
lib/vhost/vhost_user.h | 10 +
4 files changed, 371 insertions(+), 62 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v13 1/5] vhost: add user to mailmap and define to vhost hdr
2026-05-14 2:01 [PATCH v13 0/5] Support add/remove memory region and get-max-slots pravin.bathija
@ 2026-05-14 2:01 ` pravin.bathija
2026-05-14 2:01 ` [PATCH v13 2/5] vhost_user: header defines for add/rem mem region pravin.bathija
` (3 subsequent siblings)
4 siblings, 0 replies; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
- Add user to the mailmap file.
- Define a protocol feature bit VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
that indicates whether the feature/capability to add/remove memory
regions is supported. This is part of the overall support for the
add/remove memory region feature in this patch set.
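For illustration, a minimal sketch of how the back-end could gate the
new messages on this bit once protocol features have been negotiated
(the check below is an assumption for illustration; the actual
enablement happens in patch 5/5):

    /* sketch: only honor ADD_MEM_REG/REM_MEM_REG after negotiation */
    if (dev->protocol_features &
            (1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS)) {
        /* front-end may add/remove memory regions at runtime */
    }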
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
.mailmap | 1 +
lib/vhost/rte_vhost.h | 4 ++++
2 files changed, 5 insertions(+)
diff --git a/.mailmap b/.mailmap
index 0e0d83e1c6..cc44e27036 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1295,6 +1295,7 @@ Prateek Agarwal <prateekag@cse.iitb.ac.in>
Prathisna Padmasanan <prathisna.padmasanan@intel.com>
Praveen Kaligineedi <pkaligineedi@google.com>
Praveen Shetty <praveen.shetty@intel.com>
+Pravin M Bathija <pravin.bathija@dell.com>
Pravin Pathak <pravin.pathak.dev@gmail.com> <pravin.pathak@intel.com>
Prince Takkar <ptakkar@marvell.com>
Priyalee Kushwaha <priyalee.kushwaha@intel.com>
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index 2f7c4c0080..a7f9700538 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -109,6 +109,10 @@ extern "C" {
#define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12
#endif
+#ifndef VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
+#define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
+#endif
+
#ifndef VHOST_USER_PROTOCOL_F_STATUS
#define VHOST_USER_PROTOCOL_F_STATUS 16
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v13 2/5] vhost_user: header defines for add/rem mem region
2026-05-14 2:01 [PATCH v13 0/5] Support add/remove memory region and get-max-slots pravin.bathija
2026-05-14 2:01 ` [PATCH v13 1/5] vhost: add user to mailmap and define to vhost hdr pravin.bathija
@ 2026-05-14 2:01 ` pravin.bathija
2026-05-14 2:01 ` [PATCH v13 3/5] vhost_user: support function defines for back-end pravin.bathija
` (2 subsequent siblings)
4 siblings, 0 replies; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
The changes in this file cover the message request enums for
supporting add/remove memory regions. The vhost-user front-end sends
messages such as get max memory slots, add memory region and remove
memory region; these changes handle them on the vhost-user back-end.
They also add the definition of the memory region data structure to be
added/removed, and extend VhostUserMsg to include it.
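As a usage sketch (field names are taken from the structures below;
msg is assumed to be a VhostUserMsg pointer), a back-end handler reads
the new payload as:

    struct VhostUserMemoryRegion *region = &msg->payload.memreg.region;
    uint64_t gpa  = region->guest_phys_addr;  /* guest physical address */
    uint64_t uva  = region->userspace_addr;   /* front-end virtual address */
    uint64_t size = region->memory_size;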
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/vhost_user.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index ef486545ba..6435816534 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -67,6 +67,9 @@ typedef enum VhostUserRequest {
VHOST_USER_POSTCOPY_END = 30,
VHOST_USER_GET_INFLIGHT_FD = 31,
VHOST_USER_SET_INFLIGHT_FD = 32,
+ VHOST_USER_GET_MAX_MEM_SLOTS = 36,
+ VHOST_USER_ADD_MEM_REG = 37,
+ VHOST_USER_REM_MEM_REG = 38,
VHOST_USER_SET_STATUS = 39,
VHOST_USER_GET_STATUS = 40,
} VhostUserRequest;
@@ -91,6 +94,11 @@ typedef struct VhostUserMemory {
VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
} VhostUserMemory;
+typedef struct VhostUserMemRegMsg {
+ uint64_t padding;
+ VhostUserMemoryRegion region;
+} VhostUserMemRegMsg;
+
typedef struct VhostUserLog {
uint64_t mmap_size;
uint64_t mmap_offset;
@@ -186,6 +194,7 @@ typedef struct __rte_packed_begin VhostUserMsg {
struct vhost_vring_state state;
struct vhost_vring_addr addr;
VhostUserMemory memory;
+ VhostUserMemRegMsg memreg;
VhostUserLog log;
struct vhost_iotlb_msg iotlb;
VhostUserCryptoSessionParam crypto_session;
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v13 3/5] vhost_user: support function defines for back-end
2026-05-14 2:01 [PATCH v13 0/5] Support add/remove memory region and get-max-slots pravin.bathija
2026-05-14 2:01 ` [PATCH v13 1/5] vhost: add user to mailmap and define to vhost hdr pravin.bathija
2026-05-14 2:01 ` [PATCH v13 2/5] vhost_user: header defines for add/rem mem region pravin.bathija
@ 2026-05-14 2:01 ` pravin.bathija
2026-05-14 2:01 ` [PATCH v13 4/5] vhost_user: function defs for add/rem mem regions pravin.bathija
2026-05-14 2:01 ` [PATCH v13 5/5] vhost_user: enable configure memory slots pravin.bathija
4 siblings, 0 replies; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
Here we define support functions that are called from the various
vhost-user back-end message handlers such as set memory table, get max
memory slots, add memory region and remove memory region. These are
essentially common functions to unmap a set of memory regions, copy
regions, align memory addresses and DMA map/unmap a single memory
region.
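For instance, the alignment helper is meant to be used when sizing a
mapping; a minimal sketch (the mmap_addr/size/mmap_offset variable
names are assumptions; RTE_ALIGN_CEIL is the standard DPDK macro):

    /* sketch: align an mmap size to the backing fd's block size */
    uint64_t align = hua_to_alignment(dev->mem, mmap_addr);
    uint64_t mmap_size = RTE_ALIGN_CEIL(size + mmap_offset, align);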
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/vhost_user.c | 89 ++++++++++++++++++++++++++++--------------
1 file changed, 60 insertions(+), 29 deletions(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 4bfb13fb98..0ee3fe7a5e 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -171,20 +171,27 @@ get_blk_size(int fd)
return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
}
-static void
-async_dma_map(struct virtio_net *dev, bool do_map)
+static int
+async_dma_map_region(struct virtio_net *dev, struct rte_vhost_mem_region *reg, bool do_map)
{
- int ret = 0;
uint32_t i;
- struct guest_page *page;
+ int ret;
+ uint64_t reg_start = reg->host_user_addr;
+ uint64_t reg_end = reg_start + reg->size;
+
+ for (i = 0; i < dev->nr_guest_pages; i++) {
+ struct guest_page *page = &dev->guest_pages[i];
+
+ /* Only process pages belonging to this region */
+ if (page->host_user_addr < reg_start ||
+ page->host_user_addr >= reg_end)
+ continue;
- if (do_map) {
- for (i = 0; i < dev->nr_guest_pages; i++) {
- page = &dev->guest_pages[i];
+ if (do_map) {
ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
- page->host_user_addr,
- page->host_iova,
- page->size);
+ page->host_user_addr,
+ page->host_iova,
+ page->size);
if (ret) {
/*
* DMA device may bind with kernel driver, in this case,
@@ -199,33 +206,57 @@ async_dma_map(struct virtio_net *dev, bool do_map)
* normal case in async path. This is a workaround.
*/
if (rte_errno == ENODEV)
- return;
+ return 0;
/* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */
VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine map failed");
+ return -1;
}
- }
-
- } else {
- for (i = 0; i < dev->nr_guest_pages; i++) {
- page = &dev->guest_pages[i];
+ } else {
ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
- page->host_user_addr,
- page->host_iova,
- page->size);
+ page->host_user_addr,
+ page->host_iova,
+ page->size);
if (ret) {
/* like DMA map, ignore the kernel driver case when unmap. */
if (rte_errno == EINVAL)
- return;
+ return 0;
VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine unmap failed");
+ return -1;
}
}
}
+
+ return 0;
+}
+
+static void
+async_dma_map(struct virtio_net *dev, bool do_map)
+{
+ uint32_t i;
+ struct rte_vhost_mem_region *reg;
+
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
+ reg = &dev->mem->regions[i];
+ if (reg->host_user_addr == 0)
+ continue;
+ async_dma_map_region(dev, reg, do_map);
+ }
}
static void
-free_mem_region(struct virtio_net *dev)
+free_mem_region(struct rte_vhost_mem_region *reg)
+{
+ if (reg != NULL && reg->mmap_addr) {
+ munmap(reg->mmap_addr, reg->mmap_size);
+ close(reg->fd);
+ memset(reg, 0, sizeof(struct rte_vhost_mem_region));
+ }
+}
+
+static void
+free_all_mem_regions(struct virtio_net *dev)
{
uint32_t i;
struct rte_vhost_mem_region *reg;
@@ -236,12 +267,10 @@ free_mem_region(struct virtio_net *dev)
if (dev->async_copy && rte_vfio_is_enabled("vfio"))
async_dma_map(dev, false);
- for (i = 0; i < dev->mem->nregions; i++) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
reg = &dev->mem->regions[i];
- if (reg->host_user_addr) {
- munmap(reg->mmap_addr, reg->mmap_size);
- close(reg->fd);
- }
+ if (reg->mmap_addr)
+ free_mem_region(reg);
}
}
@@ -255,7 +284,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
vdpa_dev->ops->dev_cleanup(dev->vid);
if (dev->mem) {
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
}
@@ -704,7 +733,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
vhost_devices[dev->vid] = dev;
mem_size = sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
+ sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS;
mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
if (!mem) {
VHOST_CONFIG_LOG(dev->ifname, ERR,
@@ -808,8 +837,10 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
uint32_t i;
uintptr_t hua = (uintptr_t)ptr;
- for (i = 0; i < mem->nregions; i++) {
+ for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
r = &mem->regions[i];
+ if (r->host_user_addr == 0)
+ continue;
if (hua >= r->host_user_addr &&
hua < r->host_user_addr + r->size) {
return get_blk_size(r->fd);
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v13 4/5] vhost_user: function defs for add/rem mem regions
2026-05-14 2:01 [PATCH v13 0/5] Support add/remove memory region and get-max-slots pravin.bathija
` (2 preceding siblings ...)
2026-05-14 2:01 ` [PATCH v13 3/5] vhost_user: support function defines for back-end pravin.bathija
@ 2026-05-14 2:01 ` pravin.bathija
2026-05-14 2:01 ` [PATCH v13 5/5] vhost_user: enable configure memory slots pravin.bathija
4 siblings, 0 replies; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
These changes cover the function definitions for the add/remove memory
region calls, which are invoked on receiving the corresponding
vhost-user messages from the vhost-user front-end (e.g. QEMU). In
addition to testing with the QEMU front-end, testing has also been
performed with a libblkio front-end and an spdk/dpdk back-end, doing
I/O through a libblkio-based device driver to spdk-based drives.
There are also changes for set_mem_table and a new definition for get
max memory slots. Our changes optimize the set memory table call to
use common support functions. A new vhost_user_initialize_memory()
function is introduced to factor out the common memory initialization
logic from vhost_user_set_mem_table(); it is now called from both the
SET_MEM_TABLE message handler and the ADD_MEM_REG handler (for the
first region).
The get max memory slots message is how the vhost-user front-end
queries the back-end for the number of memory slots available for
registration. In addition, a support function to invalidate vrings is
defined and used in the add/remove memory region functions. The helper
function remove_guest_pages(), called from vhost_user_add_mem_reg(), is
also defined here.
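For context, the slot query amounts to the following exchange as seen
from the front-end (the send/recv helpers here are hypothetical; the
real front-ends are QEMU and libblkio):

    /* sketch: front-end querying the back-end's slot limit */
    send_vhost_user_msg(sock, VHOST_USER_GET_MAX_MEM_SLOTS, NULL);
    recv_vhost_user_reply(sock, &reply);
    max_slots = reply.payload.u64; /* VHOST_MEMORY_MAX_NREGIONS on this back-end */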
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/vhost_user.c | 329 ++++++++++++++++++++++++++++++++++++-----
1 file changed, 296 insertions(+), 33 deletions(-)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 0ee3fe7a5e..fdcb7e0158 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -71,6 +71,9 @@ VHOST_MESSAGE_HANDLER(VHOST_USER_SET_FEATURES, vhost_user_set_features, false, t
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_OWNER, vhost_user_set_owner, false, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_RESET_OWNER, vhost_user_reset_owner, false, false) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_MEM_TABLE, vhost_user_set_mem_table, true, true) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_GET_MAX_MEM_SLOTS, vhost_user_get_max_mem_slots, false, false) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_ADD_MEM_REG, vhost_user_add_mem_reg, true, true) \
+VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG, vhost_user_rem_mem_reg, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_BASE, vhost_user_set_log_base, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_FD, vhost_user_set_log_fd, true, true) \
VHOST_MESSAGE_HANDLER(VHOST_USER_SET_VRING_NUM, vhost_user_set_vring_num, false, true) \
@@ -1167,6 +1170,24 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
return 0;
}
+static void
+remove_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg)
+{
+ uint64_t reg_start = reg->host_user_addr;
+ uint64_t reg_end = reg_start + reg->size;
+ uint32_t i, j = 0;
+
+ for (i = 0; i < dev->nr_guest_pages; i++) {
+ if (dev->guest_pages[i].host_user_addr >= reg_start &&
+ dev->guest_pages[i].host_user_addr < reg_end)
+ continue;
+ if (j != i)
+ dev->guest_pages[j] = dev->guest_pages[i];
+ j++;
+ }
+ dev->nr_guest_pages = j;
+}
+
#ifdef RTE_LIBRTE_VHOST_DEBUG
/* TODO: enable it only in debug mode? */
static void
@@ -1413,6 +1434,52 @@ vhost_user_mmap_region(struct virtio_net *dev,
return 0;
}
+static int
+vhost_user_initialize_memory(struct virtio_net **pdev)
+{
+ struct virtio_net *dev = *pdev;
+ int numa_node = SOCKET_ID_ANY;
+
+ if (dev->mem != NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "memory already initialized, free it first");
+ return -1;
+ }
+
+ /*
+ * If VQ 0 has already been allocated, try to allocate on the same
+ * NUMA node. It can be reallocated later in numa_realloc().
+ */
+ if (dev->nr_vring > 0)
+ numa_node = dev->virtqueue[0]->numa_node;
+
+ dev->nr_guest_pages = 0;
+ if (dev->guest_pages == NULL) {
+ dev->max_guest_pages = 8;
+ dev->guest_pages = rte_zmalloc_socket(NULL,
+ dev->max_guest_pages *
+ sizeof(struct guest_page),
+ RTE_CACHE_LINE_SIZE,
+ numa_node);
+ if (dev->guest_pages == NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "failed to allocate memory for dev->guest_pages");
+ return -1;
+ }
+ }
+
+ dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
+ sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS, 0, numa_node);
+ if (dev->mem == NULL) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
+ rte_free(dev->guest_pages);
+ dev->guest_pages = NULL;
+ return -1;
+ }
+
+ return 0;
+}
+
static int
vhost_user_set_mem_table(struct virtio_net **pdev,
struct vhu_msg_context *ctx,
@@ -1421,7 +1488,6 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
struct virtio_net *dev = *pdev;
struct VhostUserMemory *memory = &ctx->msg.payload.memory;
struct rte_vhost_mem_region *reg;
- int numa_node = SOCKET_ID_ANY;
uint64_t mmap_offset;
uint32_t i;
bool async_notify = false;
@@ -1466,39 +1532,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_flush_all(dev);
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
}
- /*
- * If VQ 0 has already been allocated, try to allocate on the same
- * NUMA node. It can be reallocated later in numa_realloc().
- */
- if (dev->nr_vring > 0)
- numa_node = dev->virtqueue[0]->numa_node;
-
- dev->nr_guest_pages = 0;
- if (dev->guest_pages == NULL) {
- dev->max_guest_pages = 8;
- dev->guest_pages = rte_zmalloc_socket(NULL,
- dev->max_guest_pages *
- sizeof(struct guest_page),
- RTE_CACHE_LINE_SIZE,
- numa_node);
- if (dev->guest_pages == NULL) {
- VHOST_CONFIG_LOG(dev->ifname, ERR,
- "failed to allocate memory for dev->guest_pages");
- goto close_msg_fds;
- }
- }
-
- dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
- sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node);
- if (dev->mem == NULL) {
- VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
- goto free_guest_pages;
- }
+ if (vhost_user_initialize_memory(pdev) < 0)
+ goto close_msg_fds;
for (i = 0; i < memory->nregions; i++) {
reg = &dev->mem->regions[i];
@@ -1562,11 +1602,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_OK;
free_mem_table:
- free_mem_region(dev);
+ free_all_mem_regions(dev);
rte_free(dev->mem);
dev->mem = NULL;
-
-free_guest_pages:
rte_free(dev->guest_pages);
dev->guest_pages = NULL;
close_msg_fds:
@@ -1574,6 +1612,231 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_ERR;
}
+
+static int
+vhost_user_get_max_mem_slots(struct virtio_net **pdev __rte_unused,
+ struct vhu_msg_context *ctx,
+ int main_fd __rte_unused)
+{
+ uint32_t max_mem_slots = VHOST_MEMORY_MAX_NREGIONS;
+
+ ctx->msg.payload.u64 = (uint64_t)max_mem_slots;
+ ctx->msg.size = sizeof(ctx->msg.payload.u64);
+ ctx->fd_num = 0;
+
+ return RTE_VHOST_MSG_RESULT_REPLY;
+}
+
+static void
+_dev_invalidate_vrings(struct virtio_net **pdev)
+{
+ struct virtio_net *dev = *pdev;
+ uint32_t i;
+
+ for (i = 0; i < dev->nr_vring; i++) {
+ struct vhost_virtqueue *vq = dev->virtqueue[i];
+
+ if (!vq)
+ continue;
+
+ if (vq->desc || vq->avail || vq->used) {
+ vq_assert_lock(dev, vq);
+
+ /*
+ * If the memory table got updated, the ring addresses
+ * need to be translated again as virtual addresses have
+ * changed.
+ */
+ vring_invalidate(dev, vq);
+
+ translate_ring_addresses(&dev, &vq);
+ }
+ }
+
+ *pdev = dev;
+}
+
+/*
+ * Macro wrapper that performs the compile-time lock assertion with the
+ * correct message ID at the call site, then calls the implementation.
+ */
+#define dev_invalidate_vrings(pdev, id) do { \
+ static_assert(id ## _LOCK_ALL_QPS, \
+ #id " handler is not declared as locking all queue pairs"); \
+ _dev_invalidate_vrings(pdev); \
+} while (0)
+
+static int
+vhost_user_add_mem_reg(struct virtio_net **pdev,
+ struct vhu_msg_context *ctx,
+ int main_fd __rte_unused)
+{
+ uint32_t i;
+ struct virtio_net *dev = *pdev;
+ struct VhostUserMemoryRegion *region = &ctx->msg.payload.memreg.region;
+
+ /* convert first region add to normal memory table set */
+ if (dev->mem == NULL) {
+ if (vhost_user_initialize_memory(pdev) < 0)
+ goto close_msg_fds;
+ }
+
+ /* make sure new region will fit */
+ if (dev->mem->nregions >= VHOST_MEMORY_MAX_NREGIONS) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "too many memory regions already (%u)",
+ dev->mem->nregions);
+ goto close_msg_fds;
+ }
+
+ /* make sure supplied memory fd present */
+ if (ctx->fd_num != 1) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "fd count makes no sense (%u)", ctx->fd_num);
+ goto close_msg_fds;
+ }
+
+ /* Make sure no overlap in guest virtual address space */
+ for (i = 0; i < dev->mem->nregions; i++) {
+ struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
+ uint64_t current_region_guest_start = current_region->guest_user_addr;
+ uint64_t current_region_guest_end = current_region_guest_start
+ + current_region->size - 1;
+ uint64_t proposed_region_guest_start = region->userspace_addr;
+ uint64_t proposed_region_guest_end = proposed_region_guest_start
+ + region->memory_size - 1;
+
+ if (!((proposed_region_guest_end < current_region_guest_start) ||
+ (proposed_region_guest_start > current_region_guest_end))) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "requested memory region overlaps with another region");
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tRequested region address:0x%" PRIx64,
+ region->userspace_addr);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tRequested region size:0x%" PRIx64,
+ region->memory_size);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tOverlapping region address:0x%" PRIx64,
+ current_region->guest_user_addr);
+ VHOST_CONFIG_LOG(dev->ifname, ERR,
+ "\tOverlapping region size:0x%" PRIx64,
+ current_region->size);
+ goto close_msg_fds;
+ }
+ }
+
+ /* New region goes at the end of the contiguous array */
+ struct rte_vhost_mem_region *reg = &dev->mem->regions[dev->mem->nregions];
+
+ reg->guest_phys_addr = region->guest_phys_addr;
+ reg->guest_user_addr = region->userspace_addr;
+ reg->size = region->memory_size;
+ reg->fd = ctx->fds[0];
+ ctx->fds[0] = -1;
+
+ if (vhost_user_mmap_region(dev, reg, region->mmap_offset) < 0) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region");
+ if (reg->mmap_addr) {
+ /* mmap succeeded but a later step (e.g. add_guest_pages)
+ * failed; undo the mapping and any guest-page entries.
+ */
+ remove_guest_pages(dev, reg);
+ free_mem_region(reg);
+ } else {
+ close(reg->fd);
+ reg->fd = -1;
+ }
+ goto close_msg_fds;
+ }
+
+ dev->mem->nregions++;
+
+ if (dev->async_copy && rte_vfio_is_enabled("vfio")) {
+ if (async_dma_map_region(dev, reg, true) < 0)
+ goto free_new_region;
+ }
+
+ if (dev->postcopy_listening) {
+ /*
+ * Cannot use vhost_user_postcopy_register() here because it
+ * reads ctx->msg.payload.memory (SET_MEM_TABLE layout), but
+ * ADD_MEM_REG uses the memreg payload. Register the
+ * single new region directly instead.
+ */
+ if (vhost_user_postcopy_region_register(dev, reg) < 0)
+ goto free_new_region;
+ }
+
+ dev_invalidate_vrings(pdev, VHOST_USER_ADD_MEM_REG);
+ dev = *pdev;
+ dump_guest_pages(dev);
+
+ return RTE_VHOST_MSG_RESULT_OK;
+
+free_new_region:
+ if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+ async_dma_map_region(dev, reg, false);
+ remove_guest_pages(dev, reg);
+ free_mem_region(reg);
+ dev->mem->nregions--;
+close_msg_fds:
+ close_msg_fds(ctx);
+ return RTE_VHOST_MSG_RESULT_ERR;
+}
+
+static int
+vhost_user_rem_mem_reg(struct virtio_net **pdev,
+ struct vhu_msg_context *ctx,
+ int main_fd __rte_unused)
+{
+ uint32_t i;
+ struct virtio_net *dev = *pdev;
+ struct VhostUserMemoryRegion *region = &ctx->msg.payload.memreg.region;
+
+ if (dev->mem == NULL || dev->mem->nregions == 0) {
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "no memory regions to remove");
+ close_msg_fds(ctx);
+ return RTE_VHOST_MSG_RESULT_ERR;
+ }
+
+ for (i = 0; i < dev->mem->nregions; i++) {
+ struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
+
+ /*
+ * According to the vhost-user specification:
+ * The memory region to be removed is identified by its GPA,
+ * user address and size. The mmap offset is ignored.
+ */
+ if (region->userspace_addr == current_region->guest_user_addr
+ && region->guest_phys_addr == current_region->guest_phys_addr
+ && region->memory_size == current_region->size) {
+ if (dev->async_copy && rte_vfio_is_enabled("vfio"))
+ async_dma_map_region(dev, current_region, false);
+ remove_guest_pages(dev, current_region);
+ free_mem_region(current_region);
+
+ /* Compact the regions array to keep it contiguous */
+ if (i < dev->mem->nregions - 1) {
+ memmove(&dev->mem->regions[i],
+ &dev->mem->regions[i + 1],
+ (dev->mem->nregions - 1 - i) *
+ sizeof(struct rte_vhost_mem_region));
+ memset(&dev->mem->regions[dev->mem->nregions - 1],
+ 0, sizeof(struct rte_vhost_mem_region));
+ }
+
+ dev->mem->nregions--;
+ dev_invalidate_vrings(pdev, VHOST_USER_REM_MEM_REG);
+ dev = *pdev;
+ close_msg_fds(ctx);
+ return RTE_VHOST_MSG_RESULT_OK;
+ }
+ }
+
+ VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to find region");
+ close_msg_fds(ctx);
+ return RTE_VHOST_MSG_RESULT_ERR;
+}
+
static bool
vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
{
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v13 5/5] vhost_user: enable configure memory slots
2026-05-14 2:01 [PATCH v13 0/5] Support add/remove memory region and get-max-slots pravin.bathija
` (3 preceding siblings ...)
2026-05-14 2:01 ` [PATCH v13 4/5] vhost_user: function defs for add/rem mem regions pravin.bathija
@ 2026-05-14 2:01 ` pravin.bathija
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
4 siblings, 1 reply; 30+ messages in thread
From: pravin.bathija @ 2026-05-14 2:01 UTC (permalink / raw)
To: dev, fengchengwen, stephen, maxime.coquelin; +Cc: pravin.bathija, thomas
From: Pravin M Bathija <pravin.bathija@dell.com>
This patch enables the configure memory slots feature by adding
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS to the
VHOST_USER_PROTOCOL_FEATURES header define.
Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
---
lib/vhost/vhost_user.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index 6435816534..732aa4dc02 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -32,6 +32,7 @@
(1ULL << VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD) | \
(1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT) | \
+ (1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS) | \
(1ULL << VHOST_USER_PROTOCOL_F_STATUS))
typedef enum VhostUserRequest {
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback
2026-05-14 2:01 ` [PATCH v13 5/5] vhost_user: enable configure memory slots pravin.bathija
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 01/11] mailmap: add Jie Liu liujie5
` (10 more replies)
0 siblings, 11 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch set addresses the feedback received on the v10 submission
for the sxe2 PMD. The primary focus is on fixing vector path selection,
ensuring memory safety during mbuf initialization, and cleaning up
redundant logic in the configuration functions.
v14 Changes:
- Fixed vector Rx burst function being overwritten by scalar selection.
- Refactored Rx/Tx mode set functions to seed flags from caps first,
eliminating tautological checks (see the sketch after this list).
- Added memset for mbuf_def in vector init to avoid uninitialized reads.
- Converted pci_map_addr_info to designated initializers.
- Removed dead Windows-only code in meson.build.
- Added NULL checks for mbuf free for driver-wide consistency.
- Updated burst_mode_get to accurately report AVX paths.
- Adjusted SXE2_ETH_OVERHEAD to match actual VLAN capabilities.
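The caps-first seeding mentioned above, roughly (the rxq field is
hypothetical, not the driver's actual code; the offload flags are
standard rte_ethdev ones):

    /* sketch: mask requested offloads by reported capabilities first,
     * so later per-flag checks cannot be tautologically true */
    uint64_t offloads = dev->data->dev_conf.rxmode.offloads &
                        dev_info.rx_offload_capa;
    rxq->keep_crc = !!(offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC);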
Jie Liu (11):
mailmap: add Jie Liu
doc: add sxe2 guide and release notes
common/sxe2: add sxe2 basic structures
drivers: add base driver skeleton
drivers: add base driver probe skeleton
drivers: support PCI BAR mapping
common/sxe2: add ioctl interface for DMA map and unmap
net/sxe2: support queue setup and control
drivers: add data path for Rx and Tx
net/sxe2: add vectorized Rx and Tx
net/sxe2: implement Tx done cleanup
.mailmap | 1 +
doc/guides/nics/features/sxe2.ini | 29 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/sxe2.rst | 34 +
doc/guides/rel_notes/release_26_07.rst | 4 +
drivers/common/sxe2/meson.build | 15 +
drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++
drivers/common/sxe2/sxe2_common.h | 85 ++
drivers/common/sxe2/sxe2_common_log.h | 81 ++
drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++
drivers/common/sxe2/sxe2_internal_ver.h | 33 +
drivers/common/sxe2/sxe2_ioctl_chnl.c | 325 ++++++
drivers/common/sxe2/sxe2_ioctl_chnl.h | 130 +++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 62 ++
drivers/common/sxe2/sxe2_osal.h | 89 ++
drivers/meson.build | 1 +
drivers/net/meson.build | 1 +
drivers/net/sxe2/meson.build | 32 +
drivers/net/sxe2/sxe2_cmd_chnl.c | 323 ++++++
drivers/net/sxe2/sxe2_cmd_chnl.h | 37 +
drivers/net/sxe2/sxe2_drv_cmd.h | 388 ++++++++
drivers/net/sxe2/sxe2_ethdev.c | 965 ++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 318 ++++++
drivers/net/sxe2/sxe2_irq.h | 48 +
drivers/net/sxe2/sxe2_queue.c | 66 ++
drivers/net/sxe2/sxe2_queue.h | 195 ++++
drivers/net/sxe2/sxe2_rx.c | 559 +++++++++++
drivers/net/sxe2/sxe2_rx.h | 34 +
drivers/net/sxe2/sxe2_tx.c | 420 ++++++++
drivers/net/sxe2/sxe2_tx.h | 32 +
drivers/net/sxe2/sxe2_txrx.c | 357 +++++++
drivers/net/sxe2/sxe2_txrx.h | 23 +
drivers/net/sxe2/sxe2_txrx_common.h | 540 ++++++++++
drivers/net/sxe2/sxe2_txrx_poll.c | 1045 ++++++++++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.h | 20 +
drivers/net/sxe2/sxe2_txrx_vec.c | 200 ++++
drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++
drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++
drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++
drivers/net/sxe2/sxe2_vsi.c | 214 ++++
drivers/net/sxe2/sxe2_vsi.h | 204 ++++
41 files changed, 9157 insertions(+)
create mode 100644 doc/guides/nics/features/sxe2.ini
create mode 100644 doc/guides/nics/sxe2.rst
create mode 100644 drivers/common/sxe2/meson.build
create mode 100644 drivers/common/sxe2/sxe2_common.c
create mode 100644 drivers/common/sxe2/sxe2_common.h
create mode 100644 drivers/common/sxe2/sxe2_common_log.h
create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
create mode 100644 drivers/common/sxe2/sxe2_osal.h
create mode 100644 drivers/net/sxe2/meson.build
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
create mode 100644 drivers/net/sxe2/sxe2_queue.c
create mode 100644 drivers/net/sxe2/sxe2_queue.h
create mode 100644 drivers/net/sxe2/sxe2_rx.c
create mode 100644 drivers/net/sxe2/sxe2_rx.h
create mode 100644 drivers/net/sxe2/sxe2_tx.c
create mode 100644 drivers/net/sxe2/sxe2_tx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx.c
create mode 100644 drivers/net/sxe2/sxe2_txrx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.h
--
2.47.3
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v14 01/11] mailmap: add Jie Liu
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 02/11] doc: add sxe2 guide and release notes liujie5
` (9 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
.mailmap | 1 +
1 file changed, 1 insertion(+)
diff --git a/.mailmap b/.mailmap
index 895412e568..d2c4485636 100644
--- a/.mailmap
+++ b/.mailmap
@@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com>
Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com>
Jie Hai <haijie1@huawei.com>
Jie Liu <jie2.liu@hxt-semitech.com>
+Jie Liu <liujie5@linkdatatechnology.com>
Jie Pan <panjie5@jd.com>
Jie Wang <jie1x.wang@intel.com>
Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com>
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 02/11] doc: add sxe2 guide and release notes
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
2026-05-16 2:55 ` [PATCH v14 01/11] mailmap: add Jie Liu liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 03/11] common/sxe2: add sxe2 basic structures liujie5
` (8 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Add a new guide for the sxe2 PMD in the nics directory.
The guide contains driver capabilities, prerequisites,
and compilation/usage instructions.
Update the release notes to announce the addition of the
sxe2 network driver.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
doc/guides/nics/features/sxe2.ini | 29 ++++++++++++++++++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/sxe2.rst | 34 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_26_07.rst | 4 +++
4 files changed, 68 insertions(+)
create mode 100644 doc/guides/nics/features/sxe2.ini
create mode 100644 doc/guides/nics/sxe2.rst
diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..7e350bab54
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,29 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the
+; non-vector path is selected.
+;
+[Features]
+Fast mbuf free = P
+Free Tx mbuf on demand = Y
+Burst mode info = Y
+Queue start/stop = Y
+Buffer split on Rx = P
+Scattered Rx = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = P
+L3 checksum offload = Y
+L4 checksum offload = Y
+Timestamp offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
+Rx descriptor status = Y
+Tx descriptor status = Y
+FreeBSD = Y
+Linux = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
rnp
sfc_efx
softnic
+ sxe2
tap
thunderx
txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..7fcf9c085b
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+======================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100 Gbps Network Adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+The sxe2 PMD is designed to operate alongside the sxe2 kernel network driver.
+For management and control operations, the PMD communicates with the kernel
+driver via ioctl interfaces. These commands are processed by the kernel
+driver and subsequently dispatched to the hardware firmware for execution.
+
+For security and robustness, the driver's data path is optimized to operate
+using virtual addresses (IOVA as VA mode). However, to ensure full
+compatibility in system environments where an IOMMU is absent or disabled,
+the driver also provides an explicit path to support physical addressing
+(IOVA as PA mode).
+
+The hardware is capable of handling the corresponding IOVA addresses (either
+VA or PA) directly, as provided by the DPDK memory subsystem. This ensures
+that DPDK applications can only access memory segments explicitly allocated
+to the current process, preventing unauthorized access to random physical
+memory.
+
+This capability allows the PMD to coexist with kernel network interfaces
+which remain functional, although they stop receiving unicast packets as
+long as they share the same MAC address.
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..fa0f0f5cca 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -64,6 +64,10 @@ New Features
* ``--auto-probing`` enables the initial bus probing, which is the current default behavior.
+* **Added Linkdata sxe2 ethernet driver.**
+
+ Added network driver for Linkdata network adapters.
+
Removed Items
-------------
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 03/11] common/sxe2: add sxe2 basic structures
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
2026-05-16 2:55 ` [PATCH v14 01/11] mailmap: add Jie Liu liujie5
2026-05-16 2:55 ` [PATCH v14 02/11] doc: add sxe2 guide and release notes liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 04/11] drivers: add base driver skeleton liujie5
` (7 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch adds the base infrastructure for the sxe2 common
library. It includes the mandatory OS abstraction layer (OSAL),
common structure definitions, error codes, and the logging
system implementation.
Specifically, this commit:
- Implements the logging stream management using RTE_LOG_LINE.
- Defines device-specific error codes and status registers.
- Adds the initial meson build configuration for the common library.
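Usage of the logging macros added below looks like the following
sketch (the adapter, name and reg variables are assumptions):

    PMD_LOG_INFO(INIT, "probing device %s", name);
    PMD_DEV_LOG_ERR(adapter, HW, "register 0x%x read failed", reg);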
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common_log.h | 82 +++
drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++
drivers/common/sxe2/sxe2_internal_ver.h | 33 ++
drivers/common/sxe2/sxe2_osal.h | 89 +++
4 files changed, 911 insertions(+)
create mode 100644 drivers/common/sxe2/sxe2_common_log.h
create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
create mode 100644 drivers/common/sxe2/sxe2_osal.h
diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h
new file mode 100644
index 0000000000..84775de605
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_LOG_H__
+#define __SXE2_COMMON_LOG_H__
+
+extern int32_t sxe2_common_log;
+extern int32_t sxe2_log_init;
+extern int32_t sxe2_log_driver;
+extern int32_t sxe2_log_rx;
+extern int32_t sxe2_log_tx;
+extern int32_t sxe2_log_hw;
+
+#define RTE_LOGTYPE_SXE2_COM sxe2_common_log
+#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init
+#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver
+#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx
+#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx
+#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw
+
+#define SXE2_PMD_LOG(level, log_type, ...) \
+ RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \
+ __func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+ RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \
+ __func__ RTE_LOG_COMMA \
+ adapter->dev_port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+ SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+ SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+ SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+ SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+ SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+ SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+ SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+ SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>")
+
+#endif /* __SXE2_COMMON_LOG_H__ */
+
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
new file mode 100644
index 0000000000..984ea6214c
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -0,0 +1,707 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_HOST_REGS_H__
+#define __SXE2_HOST_REGS_H__
+
+#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s))
+
+#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20))
+#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4))
+#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4))
+#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4))
+#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4))
+
+#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
+#define SXE2_RXQ_CTRL_ENABLED 0x00000001
+#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+
+#define SXE2_PCIEPROC_BASE 0x002d6000
+
+#define SXE2_PF_INT_BASE 0x00260000
+#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000)
+#define SXE2_PF_INT_ALLOC_FIRST 0x7FF
+#define SXE2_PF_INT_ALLOC_LAST_S 12
+#define SXE2_PF_INT_ALLOC_LAST \
+ (0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
+#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
+#define SXE2_PF_INT_OICR_UR BIT(1)
+#define SXE2_PF_INT_OICR_CA BIT(2)
+#define SXE2_PF_INT_OICR_VFLR BIT(3)
+#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
+#define SXE2_PF_INT_OICR_BFDE BIT(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
+#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
+#define SXE2_PF_INT_OICR_GPIO BIT(9)
+#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
+#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
+#define SXE2_PF_INT_OICR_FW BIT(15)
+#define SXE2_PF_INT_OICR_SWINT BIT(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
+#define SXE2_PF_INT_OICR_GRST BIT(22)
+#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+
+#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
+
+#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
+#define SXE2_PF_INT_VFLR_DONE BIT(2)
+
+#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
+#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
+#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
+#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+
+#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
+#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
+#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
+
+#define SXE2_VFG_RAM_INIT_DONE \
+ (SXE2_PF_INT_BASE + 0x0128)
+#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+
+#define SXE2_LINK_REG_GET_10G_VALUE 4
+#define SXE2_LINK_REG_GET_25G_VALUE 1
+#define SXE2_LINK_REG_GET_50G_VALUE 2
+#define SXE2_LINK_REG_GET_100G_VALUE 3
+
+#define SXE2_PORT0_CNT 0
+#define SXE2_PORT1_CNT 1
+#define SXE2_PORT2_CNT 2
+#define SXE2_PORT3_CNT 3
+
+#define SXE2_LINK_STATUS_BASE (0x002ac200)
+#define SXE2_LINK_STATUS_PORT0_POS 3
+#define SXE2_LINK_STATUS_PORT1_POS 11
+#define SXE2_LINK_STATUS_PORT2_POS 19
+#define SXE2_LINK_STATUS_PORT3_POS 27
+#define SXE2_LINK_STATUS_MASK 1
+
+#define SXE2_LINK_SPEED_BASE (0x002ac200)
+#define SXE2_LINK_SPEED_PORT0_POS 0
+#define SXE2_LINK_SPEED_PORT1_POS 8
+#define SXE2_LINK_SPEED_PORT2_POS 16
+#define SXE2_LINK_SPEED_PORT3_POS 24
+#define SXE2_LINK_SPEED_MASK 7
+
+#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4))
+#define SXE2_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PFVP_INT_ALLOC_LAST_S 12
+#define SXE2_PFVP_INT_ALLOC_LAST_M \
+ (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12
+
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \
+ (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+
+#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
+#define SXE2_VSI_PF_ID_S 0
+#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
+#define SXE2_VSI_PF_EN_M BIT(3)
+
+#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
+#define SXE2_MBX_CTL_MSIX_INDX_S 0
+#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
+#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
+#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_TQCTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
+#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_RQCTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \
+ (0x3F)
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
+ (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
+
+#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \
+ (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx))
+#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
+
+#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
+#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_ITR_IDX_S \
+ 3
+#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
+#define SXE2_VF_DYN_CTL_INTERVAL_S 5
+#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
+
+#define SXE2_VF_DYN_CTL_INTENABLE_MSK \
+ BIT(31)
+
+#define SXE2_BAR4_MSIX_BASE 0
+#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
+#define SXE2_BAR4_MSIX_ENABLE 0
+#define SXE2_BAR4_MSIX_DISABLE 1
+
+#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4))
+
+#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7_HEAD_S 0
+#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S)
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S)
+
+#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
+
+#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
+#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
+#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+
+#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
+#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
+#define SXE2_TXQ_CTXT2_CGD_IDX_S 4
+#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4)
+#define SXE2_TXQ_CTXT2_PF_IDX_S 9
+#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9)
+#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12
+#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12)
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_WB_MODE_S 27
+#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_ITR_WB_S 28
+#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_SSO_EN_S 30
+#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+
+#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
+#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
+#define SXE2_TXQ_CTXT3_CPU_ID_S 12
+#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_QLEN_S 16
+#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
+
+#define SXE2_RX_BUF_CHAINED_MAX 10
+#define SXE2_RX_DESC_BASE_ADDR_UNIT 7
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7)
+
+enum {
+ SXE2_RX_CTXT0 = 0,
+ SXE2_RX_CTXT1,
+ SXE2_RX_CTXT2,
+ SXE2_RX_CTXT3,
+ SXE2_RX_CTXT4,
+ SXE2_RX_CTXT_CNT,
+};
+
+#define SXE2_RX_CTXT_BASE_L_S 0
+#define SXE2_RX_CTXT_BASE_L_W 32
+
+#define SXE2_RX_CTXT_BASE_H_S 0
+#define SXE2_RX_CTXT_BASE_H_W 25
+#define SXE2_RX_CTXT_DEPTH_L_S 25
+#define SXE2_RX_CTXT_DEPTH_L_W 7
+
+#define SXE2_RX_CTXT_DEPTH_H_S 0
+#define SXE2_RX_CTXT_DEPTH_H_W 6
+
+#define SXE2_RX_CTXT_DBUFF_S 6
+#define SXE2_RX_CTXT_DBUFF_W 7
+
+#define SXE2_RX_CTXT_HBUFF_S 13
+#define SXE2_RX_CTXT_HBUFF_W 5
+
+#define SXE2_RX_CTXT_HSPLT_TYPE_S 18
+#define SXE2_RX_CTXT_HSPLT_TYPE_W 2
+
+#define SXE2_RX_CTXT_DESC_TYPE_S 20
+#define SXE2_RX_CTXT_DESC_TYPE_W 1
+
+#define SXE2_RX_CTXT_CRC_S 21
+#define SXE2_RX_CTXT_CRC_W 1
+
+#define SXE2_RX_CTXT_L2TAG_FLAG_S 23
+#define SXE2_RX_CTXT_L2TAG_FLAG_W 1
+
+#define SXE2_RX_CTXT_HSPLT_0_S 24
+#define SXE2_RX_CTXT_HSPLT_0_W 4
+
+#define SXE2_RX_CTXT_HSPLT_1_S 28
+#define SXE2_RX_CTXT_HSPLT_1_W 2
+
+#define SXE2_RX_CTXT_INVALN_STP_S 31
+#define SXE2_RX_CTXT_INVALN_STP_W 1
+
+#define SXE2_RX_CTXT_LRO_ENABLE_S 0
+#define SXE2_RX_CTXT_LRO_ENABLE_W 1
+
+#define SXE2_RX_CTXT_CPUID_S 3
+#define SXE2_RX_CTXT_CPUID_W 8
+
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14
+
+#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25
+#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4
+
+#define SXE2_RX_CTXT_RELAX_DATA_S 29
+#define SXE2_RX_CTXT_RELAX_DATA_W 1
+
+#define SXE2_RX_CTXT_RELAX_WB_S 30
+#define SXE2_RX_CTXT_RELAX_WB_W 1
+
+#define SXE2_RX_CTXT_RELAX_RD_S 31
+#define SXE2_RX_CTXT_RELAX_RD_W 1
+
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1
+
+#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6
+#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3
+
+#define SXE2_RX_CTXT_VF_ID_S 9
+#define SXE2_RX_CTXT_VF_ID_W 8
+
+#define SXE2_RX_CTXT_PF_ID_S 17
+#define SXE2_RX_CTXT_PF_ID_W 3
+
+#define SXE2_RX_CTXT_VF_ENABLE_S 20
+#define SXE2_RX_CTXT_VF_ENABLE_W 1
+
+#define SXE2_RX_CTXT_VSI_ID_S 21
+#define SXE2_RX_CTXT_VSI_ID_W 10
+
+#define SXE2_PF_CTRLQ_FW_BASE 0x00312000
+#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000)
+#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080)
+#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100)
+#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180)
+#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200)
+#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280)
+#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300)
+#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380)
+#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400)
+#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480)
+
+#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000
+#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100)
+#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180)
+#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200)
+#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280)
+#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300)
+#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380)
+#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400)
+#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480)
+#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500)
+#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
+
+#define SXE2_CMD_REG_LEN_M 0x3FF
+#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
+#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
+#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+
+#define SXE2_CMD_REG_HEAD_M 0x3FF
+
+#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+
+#define SXE2_TOP_CFG_BASE 0x00292000
+#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
+#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0)
+
+#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214)
+#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
+#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
+#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
+#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
+#define SXE2_FW_VER_FIX_SHIFT (8)
+#define SXE2_FW_VER_SUB_SHIFT (16)
+#define SXE2_FW_VER_MAIN_SHIFT (24)
+
+#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c)
+
+#define SXE2_STATUS SXE2_FW_VER
+
+#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210)
+
+#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218)
+
+#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c)
+#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0)
+#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0)
+
+#define SXE2_TX_OE_BASE 0x00030000
+#define SXE2_RX_OE_BASE 0x00050000
+
+#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4))
+#define SXE2_VSI_L2TAGSTXVALID(_i) \
+ (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4))
+#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4))
+#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4))
+#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4))
+#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4))
+
+#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4))
+
+#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4))
+#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8))
+#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8))
+#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8))
+
+#define SXE2_L2TAG_ID_STAG 0
+#define SXE2_L2TAG_ID_OUT_VLAN1 1
+#define SXE2_L2TAG_ID_OUT_VLAN2 2
+#define SXE2_L2TAG_ID_VLAN 3
+
+#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
+#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+
+#define SXE2_VSI_TSR_STRIP_TAG_S 0
+#define SXE2_VSI_TSR_SHOW_TAG_S 4
+
+#define SXE2_VSI_TSR_ID_STAG BIT(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
+#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+
+#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3
+#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4
+
+#define SXE2_SWITCH_OG_BASE 0x00140000
+#define SXE2_SWITCH_SWE_BASE 0x00150000
+#define SXE2_SWITCH_RG_BASE 0x00160000
+
+#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
+#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
+
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
+
+#define SXE2_PCIE_SYS_READY 0x38c
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
+#define SXE2_PCIE_SYS_READY_R5 BIT(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+
+#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
+#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
+
+#define SXE2_PFGEN_CTRL (0x00336000)
+#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+
+#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
+#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+
+#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10))
+
+#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4))
+
+#define SXE2_ACCEPT_RULE_TAGGED_S 0
+#define SXE2_ACCEPT_RULE_UNTAGGED_S 16
+
+#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4))
+#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_RXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4))
+#define SXE2_VF_RXQ_MAPENA_M BIT(0)
+
+#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4))
+#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_TXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4))
+#define SXE2_VF_TXQ_MAPENA_M BIT(0)
+
+#define PRI_PTP_BASEADDR 0x2a8000
+
+#define GLTSYN (PRI_PTP_BASEADDR + 0x0)
+#define GLTSYN_ENA_M BIT(0)
+
+#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4)
+#define GLTSYN_CMD_INIT_TIME 0x01
+#define GLTSYN_CMD_INIT_INCVAL 0x02
+#define GLTSYN_CMD_ADJ_TIME 0x04
+#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C
+#define GLTSYN_CMD_LATCHING_SHTIME 0x80
+
+#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8)
+#define GLTSYN_SYNC_PLUS_1NS 0x1
+#define GLTSYN_SYNC_MINUS_1NS 0x2
+#define GLTSYN_SYNC_EXEC 0x3
+#define GLTSYN_SYNC_GEN_PULSE 0x4
+
+#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC)
+#define GLTSYN_SEM_BUSY_M BIT(0)
+
+#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10)
+#define GLTSYN_STAT_EVENT0_M BIT(0)
+#define GLTSYN_STAT_EVENT1_M BIT(1)
+#define GLTSYN_STAT_EVENT2_M BIT(2)
+
+#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20)
+#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24)
+#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28)
+#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C)
+
+#define GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30)
+#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34)
+#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38)
+#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C)
+
+#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40)
+#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44)
+
+#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50)
+#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54)
+
+#define GLTSYN_TGT_NS(_i) \
+ (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16))
+#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16))
+#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16))
+
+#define GLTSYN_EVENT_NS(_i) \
+ (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16))
+
+#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16))
+#define GLTSYN_EVENT_S_H_MASK (0xFFFF)
+
+#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16))
+
+#define GLTSYN_AUXOUT(_i) \
+ (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4))
+#define GLTSYN_AUXOUT_OUT_ENA BIT(0)
+#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1)
+#define GLTSYN_AUXOUT_OUTLVL BIT(3)
+#define GLTSYN_AUXOUT_INT_ENA BIT(4)
+#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3)
+
+#define GLTSYN_CLKO(_i) \
+ (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4))
+
+#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4))
+#define GLTSYN_AUXIN_RISING_EDGE BIT(0)
+#define GLTSYN_AUXIN_FALLING_EDGE BIT(1)
+#define GLTSYN_AUXIN_ENABLE BIT(4)
+
+#define CGMAC_CSR_BASE 0x2B4000
+
+#define CGMAC_PORT_OFFSET 0x00004000
+
+#define PFP_CGM_TX_TSMEM(_port, _i) \
+	(CGMAC_CSR_BASE + 0x100 + \
+	 CGMAC_PORT_OFFSET * (_port) + ((_i) * 4))
+
+#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x108 + ((_i) * 8))
+#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x10C + ((_i) * 8))
+
+#define CGMAC_CSR_MAC0_OFFSET 0x2B4000
+#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000))
+
+#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \
+ (CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \
+ ((_i) * 4))
+
+#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8))
+#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8))
+
+#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4))
+
+#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4))
+
+#define SXE2_IPSEC_TX_BASE (0x2A0000)
+#define SXE2_IPSEC_RX_BASE (0x2A2000)
+
+#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084)
+#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c)
+
+#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff)
+
+#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff)
+
+#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090)
+#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff)
+
+#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + (port) * 0x4000)
+#define SXE2_TXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0894)
+#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18)
+#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0a20 + 8 * (pri)))
+#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0a60 + 8 * (pri)))
+#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0aa0 + 8 * (pri)))
+#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988)
+#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28)
+#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0b30 + 8 * (pri)))
+#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0b70 + 8 * (pri)))
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h
new file mode 100644
index 0000000000..92f49e7a20
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_internal_ver.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(((major) << SXE2_VER_MAJOR_OFFSET) | (minor))
+#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+ SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif /* __SXE2_INTERNAL_VER_H__ */
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..3bea8fbf85
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+#include <rte_bitops.h>
+
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#define BITS_PER_BYTE 8
+
+#define IS_UNICAST_ETHER_ADDR(addr) \
+ ((bool)((((uint8_t *)(addr))[0] % ((uint8_t)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = TAILQ_FIRST((head)); \
+ (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+enum sxe2_itr_idx {
+ SXE2_ITR_IDX_0 = 0,
+ SXE2_ITR_IDX_1,
+ SXE2_ITR_IDX_2,
+ SXE2_ITR_IDX_NONE,
+};
+
+#define ETH_P_8021Q 0x8100
+#define ETH_P_8021AD 0x88a8
+#define ETH_P_QINQ1 0x9100
+
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
+#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32)
+
+#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
+
+#define DECLARE_BITMAP(name, bits) \
+ unsigned long name[BITS_TO_LONGS(bits)]
+#define BITMAP_TYPE unsigned long
+
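+/* Non-atomic bitmap helpers (plain read-modify-write, no locking),
+ * mirroring kernel-style set_bit()/clear_bit()/test_bit() semantics.
+ */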
+static inline void sxe2_set_bit(uint32_t nr, unsigned long *addr)
+{
+ addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG);
+}
+
+static inline void sxe2_clear_bit(uint32_t nr, unsigned long *addr)
+{
+ addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG));
+}
+
+static inline uint32_t sxe2_test_bit(uint32_t nr, const volatile unsigned long *addr)
+{
+ return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1)));
+}
+
+#endif /* __SXE2_OSAL_H__ */
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 04/11] drivers: add base driver skeleton
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (2 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 03/11] common/sxe2: add sxe2 basic structures liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 05/11] drivers: add base driver probe skeleton liujie5
` (6 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control-path communication between user space and
the hardware.
The character device provides an interface for ioctl-based management
operations, supporting device-specific configuration.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
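For context, a minimal sketch of how a class driver is expected to plug
into this common layer. The demo_* names and PCI IDs below are
hypothetical stand-ins; struct sxe2_class_driver,
sxe2_class_driver_register(), sxe2_common_init() and SXE2_CLASS_TYPE_ETH
come from this patch:

  #include <rte_common.h>
  #include <rte_pci.h>
  #include <bus_pci_driver.h>

  #include "sxe2_common.h"

  /* Hypothetical PCI IDs; the real table ships with the eth class driver. */
  static const struct rte_pci_id demo_id_table[] = {
  	{ RTE_PCI_DEVICE(0x1f3f, 0x1001) },
  	{ .vendor_id = 0 },	/* sentinel */
  };

  static int32_t
  demo_eth_probe(struct sxe2_common_device *cdev,
  		struct sxe2_dev_kvargs_info *kvargs)
  {
  	/* Parse class-specific devargs, allocate the ethdev, ... */
  	RTE_SET_USED(cdev);
  	RTE_SET_USED(kvargs);
  	return 0;
  }

  static int32_t
  demo_eth_remove(struct sxe2_common_device *cdev)
  {
  	RTE_SET_USED(cdev);
  	return 0;
  }

  static struct sxe2_class_driver demo_eth_driver = {
  	.drv_class = SXE2_CLASS_TYPE_ETH,
  	.name = "demo_sxe2_eth",
  	.probe = demo_eth_probe,
  	.remove = demo_eth_remove,
  	.id_table = demo_id_table,
  };

  RTE_INIT(demo_sxe2_eth_init)
  {
  	/* Merges demo_id_table into the shared PCI ID table and queues
  	 * the driver for dispatch from sxe2_common_pci_probe().
  	 */
  	sxe2_common_init();
  	sxe2_class_driver_register(&demo_eth_driver);
  }

At probe time, sxe2_common_pci_probe() matches against the merged ID
table, selects the class driver via the "class" devarg (defaulting to
eth), and dispatches to the registered probe callback.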
drivers/common/sxe2/meson.build | 15 +
drivers/common/sxe2/sxe2_common.c | 635 +++++++++++++++++++++
drivers/common/sxe2/sxe2_common.h | 85 +++
drivers/common/sxe2/sxe2_common_log.h | 1 -
drivers/common/sxe2/sxe2_host_regs.h | 226 ++++----
drivers/common/sxe2/sxe2_ioctl_chnl.c | 160 ++++++
drivers/common/sxe2/sxe2_ioctl_chnl.h | 130 +++++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 44 ++
drivers/common/sxe2/sxe2_osal.h | 2 +-
drivers/meson.build | 1 +
10 files changed, 1184 insertions(+), 115 deletions(-)
create mode 100644 drivers/common/sxe2/meson.build
create mode 100644 drivers/common/sxe2/sxe2_common.c
create mode 100644 drivers/common/sxe2/sxe2_common.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
new file mode 100644
index 0000000000..f1cc1205a0
--- /dev/null
+++ b/drivers/common/sxe2/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+if is_windows
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'net', 'eal', 'ethdev']
+
+sources = files(
+ 'sxe2_common.c',
+ 'sxe2_ioctl_chnl.c',
+)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
new file mode 100644
index 0000000000..27c33b1186
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -0,0 +1,635 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_version.h>
+#include <rte_pci.h>
+#include <rte_dev.h>
+#include <rte_devargs.h>
+#include <rte_class.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+#include <rte_fbarray.h>
+#include <rte_eal.h>
+#include <eal_private.h>
+#include <eal_memcfg.h>
+#include <bus_driver.h>
+#include <bus_pci_driver.h>
+#include <eal_export.h>
+#include <pthread.h>
+
+#include "sxe2_common.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list =
+ TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list);
+
+static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list =
+ TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list);
+
+static pthread_mutex_t sxe2_common_devices_list_lock;
+
+static struct rte_pci_id *sxe2_common_pci_id_table;
+
+static const struct {
+ const char *name;
+ uint32_t class_type;
+} sxe2_class_types[] = {
+ { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH },
+ { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA },
+};
+
+static uint32_t sxe2_class_name_to_value(const char *class_name)
+{
+ uint32_t class_type = SXE2_CLASS_TYPE_INVALID;
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(sxe2_class_types); i++) {
+ if (strcmp(class_name, sxe2_class_types[i].name) == 0)
+ class_type = sxe2_class_types[i].class_type;
+ }
+
+ return class_type;
+}
+
+static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev)
+{
+ struct sxe2_common_device *cdev = NULL;
+
+ TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) {
+ if (rte_dev == cdev->dev)
+ goto l_end;
+ }
+
+ cdev = NULL;
+l_end:
+ return cdev;
+}
+
+static struct sxe2_class_driver *sxe2_class_driver_get(uint32_t class_type)
+{
+ struct sxe2_class_driver *cdrv = NULL;
+
+ TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) {
+ if (cdrv->drv_class == class_type)
+ goto l_end;
+ }
+
+ cdrv = NULL;
+l_end:
+ return cdrv;
+}
+
+static int32_t sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info,
+ const struct rte_devargs *devargs)
+{
+ const struct rte_kvargs_pair *pair;
+ struct rte_kvargs *kvlist;
+ int32_t ret = -1;
+ uint32_t i;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ for (i = 0; i < kvlist->count; i++) {
+ pair = &kvlist->pairs[i];
+ if (pair->value == NULL || *(pair->value) == '\0') {
+ PMD_LOG_ERR(COM, "Key %s has no value.", pair->key);
+ rte_kvargs_free(kvlist);
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+ kv_info->kvlist = kvlist;
+ ret = 0;
+ PMD_LOG_DEBUG(COM, "kvargs %d preprocessing success.",
+ kv_info->kvlist->count);
+l_end:
+ return ret;
+}
+
+static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info)
+{
+ if ((kv_info != NULL) && (kv_info->kvlist != NULL)) {
+ rte_kvargs_free(kv_info->kvlist);
+ kv_info->kvlist = NULL;
+ }
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process)
+int32_t
+sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info,
+ const char *const key_match, arg_handler_t handler, void *opaque_arg)
+{
+ const struct rte_kvargs_pair *pair;
+ struct rte_kvargs *kvlist;
+ uint32_t i;
+ int32_t ret = 0;
+
+ if ((kv_info == NULL) || (kv_info->kvlist == NULL) ||
+ (key_match == NULL)) {
+ PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter.");
+ ret = -EINVAL;
+ goto l_end;
+ }
+ kvlist = kv_info->kvlist;
+
+ for (i = 0; i < kvlist->count; i++) {
+ pair = &kvlist->pairs[i];
+ if (strcmp(pair->key, key_match) == 0) {
+ ret = (*handler)(pair->key, pair->value, opaque_arg);
+ if (ret)
+ goto l_end;
+
+ kv_info->is_used[i] = true;
+ break;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_parse_class_type(const char *key, const char *value, void *args)
+{
+ uint32_t *class_type = (uint32_t *)args;
+ int32_t ret = 0;
+
+ *class_type = sxe2_class_name_to_value(value);
+ if (*class_type == SXE2_CLASS_TYPE_INVALID) {
+ ret = -EINVAL;
+ PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value);
+ }
+
+ return ret;
+}
+
+static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+ int32_t ret = 0;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_drv_dev_open(cdev, pci_dev);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret);
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_handshark(cdev);
+ if (ret != 0) {
+		PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret);
+ goto l_close_dev;
+ }
+
+ goto l_end;
+
+l_close_dev:
+ sxe2_drv_dev_close(cdev);
+l_end:
+ return ret;
+}
+
+static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ if (TAILQ_EMPTY(&sxe2_common_devices_list))
+ (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL);
+
+ sxe2_drv_dev_close(cdev);
+}
+
+static struct sxe2_common_device *sxe2_common_device_alloc(
+ struct rte_device *rte_dev, uint32_t class_type)
+{
+ struct sxe2_common_device *cdev = NULL;
+
+ cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0);
+ if (cdev == NULL) {
+ PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device.");
+ goto l_end;
+ }
+ cdev->dev = rte_dev;
+ cdev->class_type = class_type;
+ cdev->config.kernel_reset = false;
+ pthread_mutex_init(&cdev->config.lock, NULL);
+
+ (void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+ TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next);
+ pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+l_end:
+ return cdev;
+}
+
+static void sxe2_common_device_free(struct sxe2_common_device *cdev)
+{
+ (void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+ TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next);
+ pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+ rte_free(cdev);
+}
+
+static bool sxe2_dev_is_pci(const struct rte_device *dev)
+{
+ return strcmp(dev->bus->name, "pci") == 0;
+}
+
+static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv,
+ const struct rte_device *dev)
+{
+ const struct rte_pci_device *pci_dev;
+ const struct rte_pci_id *id_table;
+ bool ret = false;
+
+ if (!sxe2_dev_is_pci(dev)) {
+ PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name);
+ goto l_end;
+ }
+
+ pci_dev = RTE_DEV_TO_PCI_CONST(dev);
+ for (id_table = cdrv->id_table; id_table->vendor_id != 0;
+ id_table++) {
+
+ if (id_table->vendor_id != pci_dev->id.vendor_id &&
+ id_table->vendor_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->device_id != pci_dev->id.device_id &&
+ id_table->device_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->subsystem_vendor_id !=
+ pci_dev->id.subsystem_vendor_id &&
+ id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->subsystem_device_id !=
+ pci_dev->id.subsystem_device_id &&
+ id_table->subsystem_device_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->class_id != pci_dev->id.class_id &&
+ id_table->class_id != RTE_CLASS_ANY_ID) {
+ continue;
+ }
+ ret = true;
+ break;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_classes_driver_probe(struct sxe2_common_device *cdev,
+ struct sxe2_dev_kvargs_info *kv_info, uint32_t class_type)
+{
+ struct sxe2_class_driver *cdrv = NULL;
+ int32_t ret = -1;
+
+ cdrv = sxe2_class_driver_get(class_type);
+ if (cdrv == NULL) {
+ PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type);
+ goto l_end;
+ }
+
+ if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) {
+ PMD_LOG_ERR(COM, "Fail to match pci id for driver:%s.", cdrv->name);
+ goto l_end;
+ }
+
+ ret = cdrv->probe(cdev, kv_info);
+ if (ret) {
+ PMD_LOG_DEBUG(COM, "Fail to probe driver:%s.", cdrv->name);
+ goto l_end;
+ }
+
+ cdev->cdrv = cdrv;
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_classes_driver_remove(struct sxe2_common_device *cdev)
+{
+ struct sxe2_class_driver *cdrv = cdev->cdrv;
+
+ return cdrv->remove(cdev);
+}
+
+static int32_t sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info)
+{
+ int32_t ret = 0;
+ uint32_t i;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ if (kv_info == NULL)
+ goto l_end;
+
+ for (i = 0; i < kv_info->kvlist->count; i++) {
+ if (kv_info->is_used[i] == 0) {
+ PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.",
+ kv_info->kvlist->pairs[i].key);
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ struct rte_device *rte_dev = &pci_dev->device;
+ struct sxe2_common_device *cdev;
+ struct sxe2_dev_kvargs_info *kv_info_p = NULL;
+
+ uint32_t class_type = SXE2_CLASS_TYPE_ETH;
+ int32_t ret = -1;
+
+ PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name);
+
+ cdev = sxe2_rtedev_to_cdev(rte_dev);
+ if (cdev != NULL) {
+ PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name);
+ ret = -EBUSY;
+ goto l_end;
+ }
+
+ if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) {
+ kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info));
+ if (!kv_info_p) {
+ PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info");
+ goto l_end;
+ }
+
+ ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Unsupported device args: %s",
+ rte_dev->devargs->args);
+ goto l_free_kvargs;
+ }
+
+ ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS,
+ sxe2_parse_class_type, &class_type);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s",
+ rte_dev->devargs->args);
+ goto l_free_args;
+ }
+
+ }
+
+ cdev = sxe2_common_device_alloc(rte_dev, class_type);
+ if (cdev == NULL) {
+ ret = -ENOMEM;
+ goto l_free_args;
+ }
+
+ ret = sxe2_common_device_setup(cdev);
+ if (ret != 0)
+ goto l_err_setup;
+
+ ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type);
+ if (ret != 0)
+ goto l_err_probe;
+
+ ret = sxe2_kvargs_validate(kv_info_p);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Device args validate failed: %s",
+ rte_dev->devargs->args);
+ goto l_err_valid;
+ }
+ cdev->kvargs = kv_info_p;
+
+ goto l_end;
+l_err_valid:
+ (void)sxe2_classes_driver_remove(cdev);
+l_err_probe:
+ sxe2_common_device_cleanup(cdev);
+l_err_setup:
+ sxe2_common_device_free(cdev);
+l_free_args:
+ sxe2_kvargs_free(kv_info_p);
+l_free_kvargs:
+ free(kv_info_p);
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name);
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+ PMD_LOG_ERR(COM, "Fail to get remove device.");
+ goto l_end;
+ }
+
+ ret = sxe2_classes_driver_remove(cdev);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name);
+ goto l_end;
+ }
+
+ sxe2_common_device_cleanup(cdev);
+
+ if (cdev->kvargs != NULL) {
+ sxe2_kvargs_free(cdev->kvargs);
+ free(cdev->kvargs);
+ cdev->kvargs = NULL;
+ }
+
+ sxe2_common_device_free(cdev);
+
+l_end:
+ return ret;
+}
+
+static struct rte_pci_driver sxe2_common_pci_driver = {
+ .driver = {
+ .name = SXE2_COMMON_PCI_DRIVER_NAME,
+ },
+ .probe = sxe2_common_pci_probe,
+ .remove = sxe2_common_pci_remove,
+};
+
+static uint32_t sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
+{
+ uint32_t table_size = 0;
+
+ while (id_table->vendor_id != 0) {
+ table_size++;
+ id_table++;
+ }
+
+ return table_size;
+}
+
+static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id,
+ const struct rte_pci_id *id_table, uint32_t next_idx)
+{
+	int32_t current_size = next_idx;
+ int32_t i;
+ bool exists = false;
+
+ for (i = 0; i < current_size; i++) {
+ if ((id->device_id == id_table[i].device_id) &&
+ (id->vendor_id == id_table[i].vendor_id) &&
+ (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) &&
+ (id->subsystem_device_id == id_table[i].subsystem_device_id)) {
+ exists = true;
+ break;
+ }
+ }
+
+ return exists;
+}
+
+static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table,
+ uint32_t *next_idx, const struct rte_pci_id *insert_table)
+{
+ for (; insert_table->vendor_id != 0; insert_table++) {
+ if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) {
+ id_table[*next_idx] = *insert_table;
+ (*next_idx)++;
+ }
+ }
+}
+
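+/* Grow the shared PCI ID table with the entries from id_table, skipping
+ * duplicates, and re-point the common PCI driver at the merged copy.
+ */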
+static int32_t sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table)
+{
+ const struct rte_pci_id *id_iter;
+ struct rte_pci_id *updated_table;
+ struct rte_pci_id *old_table;
+ uint32_t num_ids = 0;
+ uint32_t i = 0;
+ int32_t ret = 0;
+
+ old_table = sxe2_common_pci_id_table;
+ if (old_table)
+ num_ids = sxe2_common_pci_id_table_size_get(old_table);
+
+ num_ids += sxe2_common_pci_id_table_size_get(id_table);
+
+ num_ids += 1;
+
+ updated_table = calloc(num_ids, sizeof(*updated_table));
+ if (!updated_table) {
+ PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table");
+		ret = -ENOMEM;
+		goto l_end;
+ }
+
+ if (old_table == NULL) {
+ for (id_iter = id_table; id_iter->vendor_id != 0;
+ id_iter++, i++)
+ updated_table[i] = *id_iter;
+ } else {
+ for (id_iter = old_table; id_iter->vendor_id != 0;
+ id_iter++, i++)
+ updated_table[i] = *id_iter;
+
+ sxe2_common_pci_id_insert(updated_table, &i, id_table);
+ }
+
+ updated_table[i].vendor_id = 0;
+ sxe2_common_pci_driver.id_table = updated_table;
+ sxe2_common_pci_id_table = updated_table;
+ free(old_table);
+
+l_end:
+ return ret;
+}
+
+static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver)
+{
+ if (driver->id_table != NULL) {
+ if (sxe2_common_pci_id_table_update(driver->id_table) != 0)
+ return;
+ }
+
+ if (driver->intr_lsc)
+ sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+ if (driver->intr_rmv)
+ sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+ sxe2_common_driver_on_register_pci(driver);
+ TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+ const struct rte_pci_id empty_table[] = {
+ {
+ .vendor_id = 0
+ },
+ };
+ int32_t ret = -1;
+
+ if (sxe2_common_pci_id_table == NULL) {
+ ret = sxe2_common_pci_id_table_update(empty_table);
+ if (ret != 0)
+ goto l_end;
+ }
+ rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+ return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+ goto l_end;
+
+ pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+ sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+ return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+ if (sxe2_common_pci_id_table != NULL) {
+ rte_pci_unregister(&sxe2_common_pci_driver);
+ free(sxe2_common_pci_id_table);
+ }
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
+
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..5fe218db99
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_H__
+#define __SXE2_COMMON_H__
+
+#include <rte_bitops.h>
+#include <rte_kvargs.h>
+#include <rte_compat.h>
+#include <rte_memory.h>
+#include <rte_ticketlock.h>
+#include <pthread.h>
+
+#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci"
+
+#define SXE2_CDEV_TO_CMD_FD(cdev) \
+ ((cdev)->config.cmd_fd)
+
+#define SXE2_DEVARGS_KEY_CLASS "class"
+
+struct sxe2_class_driver;
+
+enum sxe2_class_type {
+ SXE2_CLASS_TYPE_ETH = 0,
+ SXE2_CLASS_TYPE_VDPA,
+ SXE2_CLASS_TYPE_INVALID,
+};
+
+struct sxe2_common_dev_config {
+ int32_t cmd_fd;
+ bool support_iommu;
+ bool kernel_reset;
+ pthread_mutex_t lock;
+};
+
+struct sxe2_common_device {
+ struct rte_device *dev;
+ TAILQ_ENTRY(sxe2_common_device) next;
+ struct sxe2_class_driver *cdrv;
+ enum sxe2_class_type class_type;
+ struct sxe2_common_dev_config config;
+ struct sxe2_dev_kvargs_info *kvargs;
+};
+
+struct sxe2_dev_kvargs_info {
+ struct rte_kvargs *kvlist;
+ bool is_used[RTE_KVARGS_MAX];
+};
+
+typedef int32_t (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev,
+ struct sxe2_dev_kvargs_info *kvargs);
+
+typedef int32_t (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev);
+
+struct sxe2_class_driver {
+ TAILQ_ENTRY(sxe2_class_driver) next;
+ enum sxe2_class_type drv_class;
+ const char *name;
+ sxe2_class_driver_probe_t *probe;
+ sxe2_class_driver_remove_t *remove;
+ const struct rte_pci_id *id_table;
+ uint32_t intr_lsc;
+ uint32_t intr_rmv;
+};
+
+__rte_internal
+void
+sxe2_common_mem_event_cb(enum rte_mem_event type,
+ const void *addr, size_t size, void *arg __rte_unused);
+
+__rte_internal
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver);
+
+__rte_internal
+void
+sxe2_common_init(void);
+
+__rte_internal
+int32_t
+sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info,
+ const char *const key_match, arg_handler_t handler, void *opaque_arg);
+
+#endif /* __SXE2_COMMON_H__ */
diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h
index 84775de605..e7a9a5f8d0 100644
--- a/drivers/common/sxe2/sxe2_common_log.h
+++ b/drivers/common/sxe2/sxe2_common_log.h
@@ -79,4 +79,3 @@ extern int32_t sxe2_log_hw;
#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>")
#endif /* __SXE2_COMMON_LOG_H__ */
-
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
index 984ea6214c..9116be0ba0 100644
--- a/drivers/common/sxe2/sxe2_host_regs.h
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -15,7 +15,7 @@
#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
#define SXE2_RXQ_CTRL_ENABLED 0x00000001
-#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+#define SXE2_RXQ_CTRL_CDE_ENABLE RTE_BIT32(3)
#define SXE2_PCIEPROC_BASE 0x002d6000
@@ -25,78 +25,78 @@
#define SXE2_PF_INT_ALLOC_LAST_S 12
#define SXE2_PF_INT_ALLOC_LAST \
(0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
-#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+#define SXE2_PF_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
-#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
-#define SXE2_PF_INT_OICR_UR BIT(1)
-#define SXE2_PF_INT_OICR_CA BIT(2)
-#define SXE2_PF_INT_OICR_VFLR BIT(3)
-#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
-#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
-#define SXE2_PF_INT_OICR_BFDE BIT(6)
-#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
-#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
-#define SXE2_PF_INT_OICR_GPIO BIT(9)
-#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
-#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
-#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
-#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
-#define SXE2_PF_INT_OICR_FW BIT(15)
-#define SXE2_PF_INT_OICR_SWINT BIT(16)
-#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
-#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
-#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
-#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
-#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
-#define SXE2_PF_INT_OICR_GRST BIT(22)
-#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
-#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
-#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT RTE_BIT32(0)
+#define SXE2_PF_INT_OICR_UR RTE_BIT32(1)
+#define SXE2_PF_INT_OICR_CA RTE_BIT32(2)
+#define SXE2_PF_INT_OICR_VFLR RTE_BIT32(3)
+#define SXE2_PF_INT_OICR_VFR_DONE RTE_BIT32(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR RTE_BIT32(5)
+#define SXE2_PF_INT_OICR_BFDE RTE_BIT32(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR RTE_BIT32(7)
+#define SXE2_PF_INT_OICR_ECC_ERR RTE_BIT32(8)
+#define SXE2_PF_INT_OICR_GPIO RTE_BIT32(9)
+#define SXE2_PF_INT_OICR_TSYN_TX RTE_BIT32(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT RTE_BIT32(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT RTE_BIT32(13)
+#define SXE2_PF_INT_OICR_EXHAUST RTE_BIT32(14)
+#define SXE2_PF_INT_OICR_FW RTE_BIT32(15)
+#define SXE2_PF_INT_OICR_SWINT RTE_BIT32(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG RTE_BIT32(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR RTE_BIT32(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR RTE_BIT32(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE RTE_BIT32(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT RTE_BIT32(21)
+#define SXE2_PF_INT_OICR_GRST RTE_BIT32(22)
+#define SXE2_PF_INT_OICR_FWQ_INT RTE_BIT32(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT RTE_BIT32(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT RTE_BIT32(31)
#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
-#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
-#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
-#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
-#define SXE2_PF_INT_VFLR_DONE BIT(2)
+#define SXE2_PF_INT_FW_ABNORMAL RTE_BIT32(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW RTE_BIT32(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG RTE_BIT32(18)
+#define SXE2_PF_INT_VFLR_DONE RTE_BIT32(2)
#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_OICR_CTL_ITR_IDX \
(0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \
(0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
-#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+#define SXE2_PF_INT_GPIO_X_ENA(x) RTE_BIT32(x)
#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
-#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN RTE_BIT32(4)
#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
#define SXE2_VFG_RAM_INIT_DONE \
(SXE2_PF_INT_BASE + 0x0128)
-#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
-#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
-#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+#define SXE2_VFG_RAM_INIT_DONE_0 RTE_BIT32(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 RTE_BIT32(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 RTE_BIT32(2)
#define SXE2_LINK_REG_GET_10G_VALUE 4
#define SXE2_LINK_REG_GET_25G_VALUE 1
@@ -129,7 +129,7 @@
#define SXE2_PFVP_INT_ALLOC_LAST_S 12
#define SXE2_PFVP_INT_ALLOC_LAST_M \
(0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
-#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+#define SXE2_PFVP_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
@@ -139,7 +139,7 @@
#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \
(0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
-#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
@@ -147,37 +147,37 @@
#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
-#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M RTE_BIT32(16)
#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
#define SXE2_VSI_PF_ID_S 0
#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
-#define SXE2_VSI_PF_EN_M BIT(3)
+#define SXE2_VSI_PF_EN_M RTE_BIT32(3)
#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
#define SXE2_MBX_CTL_MSIX_INDX_S 0
#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
-#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+#define SXE2_MBX_CTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
#define SXE2_PF_INT_TQCTL_ITR_IDX \
(0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
-#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
#define SXE2_PF_INT_RQCTL_ITR_IDX \
(0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
-#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \
(0x3F)
-#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (RTE_BIT32(6))
#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
(0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
@@ -187,20 +187,20 @@
#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
-#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
-#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
-#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_INTENABLE RTE_BIT32(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA RTE_BIT32(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG RTE_BIT32(2)
#define SXE2_VF_DYN_CTL_ITR_IDX_S \
3
#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
#define SXE2_VF_DYN_CTL_INTERVAL_S 5
#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
-#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE RTE_BIT32(24)
#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
#define SXE2_VF_DYN_CTL_INTENABLE_MSK \
- BIT(31)
+ RTE_BIT32(31)
#define SXE2_BAR4_MSIX_BASE 0
#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
@@ -225,8 +225,8 @@
#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
-#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
-#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+#define SXE2_TXQ_CTRL_SW_EN_M RTE_BIT32(0)
+#define SXE2_TXQ_CTRL_HW_EN_M RTE_BIT32(1)
#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
@@ -239,37 +239,37 @@
#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
-#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M RTE_BIT32(25)
#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
-#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M RTE_BIT32(26)
#define SXE2_TXQ_CTXT2_WB_MODE_S 27
-#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_WB_MODE_M RTE_BIT32(27)
#define SXE2_TXQ_CTXT2_ITR_WB_S 28
-#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_ITR_WB_M RTE_BIT32(28)
#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
-#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M RTE_BIT32(29)
#define SXE2_TXQ_CTXT2_SSO_EN_S 30
-#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+#define SXE2_TXQ_CTXT2_SSO_EN_M RTE_BIT32(30)
#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
#define SXE2_TXQ_CTXT3_CPU_ID_S 12
#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
-#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M RTE_BIT32(20)
#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
-#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M RTE_BIT32(21)
#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
-#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M RTE_BIT32(22)
#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
-#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M RTE_BIT32(13)
#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
-#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M RTE_BIT32(14)
#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
-#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M RTE_BIT32(15)
#define SXE2_TXQ_CTXT3_QLEN_S 16
#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
@@ -400,16 +400,16 @@ enum {
#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
#define SXE2_CMD_REG_LEN_M 0x3FF
-#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
-#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
-#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
-#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+#define SXE2_CMD_REG_LEN_VFE_M RTE_BIT32(28)
+#define SXE2_CMD_REG_LEN_OVFL_M RTE_BIT32(29)
+#define SXE2_CMD_REG_LEN_CRIT_M RTE_BIT32(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M RTE_BIT32(31)
#define SXE2_CMD_REG_HEAD_M 0x3FF
#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
-#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
-#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK RTE_BIT32(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK RTE_BIT32(1)
#define SXE2_TOP_CFG_BASE 0x00292000
#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
@@ -465,26 +465,26 @@ enum {
#define SXE2_L2TAG_ID_VLAN 3
#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
-#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+#define SXE2_PFP_L2TAGSEN_DVM RTE_BIT32(10)
#define SXE2_VSI_TSR_STRIP_TAG_S 0
#define SXE2_VSI_TSR_SHOW_TAG_S 4
-#define SXE2_VSI_TSR_ID_STAG BIT(0)
-#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
-#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
-#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+#define SXE2_VSI_TSR_ID_STAG RTE_BIT32(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 RTE_BIT32(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 RTE_BIT32(2)
+#define SXE2_VSI_TSR_ID_VLAN RTE_BIT32(3)
#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
-#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID RTE_BIT32(3)
#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
-#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID RTE_BIT32(7)
#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
-#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID RTE_BIT32(19)
#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
-#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID RTE_BIT32(23)
#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
@@ -498,43 +498,43 @@ enum {
#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
-#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE RTE_BIT32(9)
-#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
-#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
-#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
-#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN RTE_BIT32(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN RTE_BIT32(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN RTE_BIT32(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE RTE_BIT32(9)
#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
#define SXE2_PCIE_SYS_READY 0x38c
-#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
-#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
-#define SXE2_PCIE_SYS_READY_R5 BIT(3)
-#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT RTE_BIT32(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE RTE_BIT32(2)
+#define SXE2_PCIE_SYS_READY_R5 RTE_BIT32(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP RTE_BIT32(16)
#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
-#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING RTE_BIT32(21)
#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
#define SXE2_PFGEN_CTRL (0x00336000)
-#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+#define SXE2_PFGEN_CTRL_PFSWR RTE_BIT32(0)
#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
-#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+#define SXE2_VFGEN_CTRL_VFSWR RTE_BIT32(0)
#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4)
#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (RTE_BIT32(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (RTE_BIT32(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (RTE_BIT32(2))
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1)
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (RTE_BIT32(10))
#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4))
@@ -548,7 +548,7 @@ enum {
#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S)
#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4))
-#define SXE2_VF_RXQ_MAPENA_M BIT(0)
+#define SXE2_VF_RXQ_MAPENA_M RTE_BIT32(0)
#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4))
#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0
@@ -557,12 +557,12 @@ enum {
#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S)
#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4))
-#define SXE2_VF_TXQ_MAPENA_M BIT(0)
+#define SXE2_VF_TXQ_MAPENA_M RTE_BIT32(0)
#define PRI_PTP_BASEADDR 0x2a8000
#define GLTSYN (PRI_PTP_BASEADDR + 0x0)
-#define GLTSYN_ENA_M BIT(0)
+#define GLTSYN_ENA_M RTE_BIT32(0)
#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4)
#define GLTSYN_CMD_INIT_TIME 0x01
@@ -578,12 +578,12 @@ enum {
#define GLTSYN_SYNC_GEN_PULSE 0x4
#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC)
-#define GLTSYN_SEM_BUSY_M BIT(0)
+#define GLTSYN_SEM_BUSY_M RTE_BIT32(0)
#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10)
-#define GLTSYN_STAT_EVENT0_M BIT(0)
-#define GLTSYN_STAT_EVENT1_M BIT(1)
-#define GLTSYN_STAT_EVENT2_M BIT(2)
+#define GLTSYN_STAT_EVENT0_M RTE_BIT32(0)
+#define GLTSYN_STAT_EVENT1_M RTE_BIT32(1)
+#define GLTSYN_STAT_EVENT2_M RTE_BIT32(2)
#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20)
#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24)
@@ -616,19 +616,19 @@ enum {
#define GLTSYN_AUXOUT(_i) \
(PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4))
-#define GLTSYN_AUXOUT_OUT_ENA BIT(0)
+#define GLTSYN_AUXOUT_OUT_ENA RTE_BIT32(0)
#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1)
-#define GLTSYN_AUXOUT_OUTLVL BIT(3)
-#define GLTSYN_AUXOUT_INT_ENA BIT(4)
+#define GLTSYN_AUXOUT_OUTLVL RTE_BIT32(3)
+#define GLTSYN_AUXOUT_INT_ENA RTE_BIT32(4)
#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3)
#define GLTSYN_CLKO(_i) \
(PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4))
#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4))
-#define GLTSYN_AUXIN_RISING_EDGE BIT(0)
-#define GLTSYN_AUXIN_FALLING_EDGE BIT(1)
-#define GLTSYN_AUXIN_ENABLE BIT(4)
+#define GLTSYN_AUXIN_RISING_EDGE RTE_BIT32(0)
+#define GLTSYN_AUXIN_FALLING_EDGE RTE_BIT32(1)
+#define GLTSYN_AUXIN_ENABLE RTE_BIT32(4)
#define CGMAC_CSR_BASE 0x2B4000
@@ -654,13 +654,13 @@ enum {
#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11
#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
-#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4))
#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11
#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
-#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4))
#define SXE2_IPSEC_TX_BASE (0x2A0000)
@@ -704,4 +704,4 @@ enum {
#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
(0x0b70 + 8 * (pri)))
-#endif
+#endif /* __SXE2_HOST_REGS_H__ */
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
new file mode 100644
index 0000000000..4c2bc452ff
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_version.h>
+#include <eal_export.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close)
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev)
+{
+ cdev->config.kernel_reset = true;
+}
+
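+/*
+ * Pass one admin command through to the kernel driver via the
+ * SXE2_COM_CMD_PASSTHROUGH ioctl. Commands are serialized with the
+ * per-device config lock so that concurrent callers cannot
+ * interleave requests on the character-device channel.
+ */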
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec)
+int32_t
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+ struct sxe2_drv_cmd_params *cmd_params)
+{
+ int32_t cmd_fd;
+ int32_t ret = -EIO;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]"
+ "opcode[0x%x] req_len[%d] resp_len[%d]",
+ cmd_fd, cmd_params->trace_id, cmd_params->opcode,
+ cmd_params->req_len, cmd_params->resp_len);
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s",
+ cmd_fd, cmd_params->opcode, ret, strerror(errno));
+ ret = -errno;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
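+/*
+ * Open the per-port character device exposed by the kernel driver.
+ * The node name is SXE2_CHR_DEV_NAME followed by the PCI DBDF of the
+ * port, e.g. /dev/sxe2-dpdk-0000:3b:00.0 (address for illustration).
+ */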
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open)
+int32_t
+sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct rte_pci_device *pci_dev)
+{
+ int32_t ret = 0;
+ int32_t fd = 0;
+ char drv_name[32] = {0};
+
+ snprintf(drv_name, sizeof(drv_name),
+ "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8,
+ SXE2_CHR_DEV_NAME,
+ pci_dev->addr.domain,
+ pci_dev->addr.bus,
+ pci_dev->addr.devid,
+ pci_dev->addr.function);
+
+ fd = open(drv_name, O_RDWR);
+ if (fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s",
+ drv_name, ret, strerror(errno));
+ goto l_end;
+ }
+
+ SXE2_CDEV_TO_CMD_FD(cdev) = fd;
+
+ PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d",
+ drv_name, SXE2_CDEV_TO_CMD_FD(cdev));
+
+l_end:
+ return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close)
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev)
+{
+ int32_t fd = SXE2_CDEV_TO_CMD_FD(cdev);
+
+ if (fd >= 0) {
+ close(fd);
+ PMD_LOG_INFO(COM, "closed device fd=%d", fd);
+ }
+ SXE2_CDEV_TO_CMD_FD(cdev) = -1;
+}
+
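+/*
+ * Version/capability handshake with the kernel driver: the PMD
+ * reports its channel version (SXE2_COM_VER) and the kernel returns
+ * capability bits, from which IOMMU map support is derived.
+ */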
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+int32_t
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+{
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+ struct sxe2_ioctl_cmd_common_hdr cmd_params;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd);
+
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
+ cmd_params.dpdk_ver = SXE2_COM_VER;
+ cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr);
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+ if (cmd_params.cap & RTE_BIT32(SXE2_COM_CAP_IOMMU_MAP))
+ cdev->config.support_iommu = true;
+ else
+ cdev->config.support_iommu = false;
+
+l_end:
+ return ret;
+}
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h
new file mode 100644
index 0000000000..2560349e70
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IOCTL_CHNL_H__
+#define __SXE2_IOCTL_CHNL_H__
+
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+
+#include "sxe2_internal_ver.h"
+
+#define SXE2_COM_INVAL_U32 0xFFFFFFFF
+
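+/*
+ * mmap offsets exchanged over the channel encode a BAR index in the
+ * bits above SXE2_COM_PCI_OFFSET_SHIFT and the byte offset within
+ * that BAR in the low 40 bits.
+ */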
+#define SXE2_COM_PCI_OFFSET_SHIFT 40
+
+#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((uint64_t)(index) << SXE2_COM_PCI_OFFSET_SHIFT)
+#define SXE2_COM_PCI_OFFSET_MASK (((uint64_t)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1)
+#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((uint64_t)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \
+ (((uint64_t)(off)) & SXE2_COM_PCI_OFFSET_MASK))
+
+#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU
+
+#define SXE2_DRV_CMD_DFLT_TIMEOUT (30)
+
+#define SXE2_COM_VER_MAJOR 1
+#define SXE2_COM_VER_MINOR 0
+#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR)
+
+enum SXE2_COM_CMD {
+ SXE2_DEVICE_HANDSHAKE = 1,
+ SXE2_DEVICE_IO_IRQS_REQ,
+ SXE2_DEVICE_EVT_IRQ_REQ,
+ SXE2_DEVICE_RST_IRQ_REQ,
+ SXE2_DEVICE_EVT_CAUSE_GET,
+ SXE2_DEVICE_DMA_MAP,
+ SXE2_DEVICE_DMA_UNMAP,
+ SXE2_DEVICE_PASSTHROUGH,
+ SXE2_DEVICE_MAX,
+};
+
+#define SXE2_CMD_TYPE 'S'
+
+#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE)
+#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ)
+#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_IRQ_REQ)
+#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ)
+#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET)
+#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP)
+#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP)
+#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH)
+
+enum sxe2_com_cap {
+ SXE2_COM_CAP_IOMMU_MAP = 0,
+};
+
+struct sxe2_ioctl_cmd_common_hdr {
+ uint32_t dpdk_ver;
+ uint32_t drv_ver;
+ uint32_t msg_len;
+ uint32_t cap;
+ uint8_t reserved[32];
+};
+
+struct sxe2_drv_cmd_params {
+ uint64_t trace_id;
+ uint32_t timeout;
+ uint32_t opcode;
+ uint16_t vsi_id;
+ uint16_t repr_id;
+ uint32_t req_len;
+ uint32_t resp_len;
+ void *req_data;
+ void *resp_data;
+ uint8_t resv[32];
+};
+
+struct sxe2_ioctl_irq_set {
+ uint32_t cnt;
+ uint8_t resv[4];
+ uint32_t base_irq_in_com;
+ int32_t *event_fd;
+};
+
+enum sxe2_com_event_cause {
+ SXE2_COM_EC_LINK_CHG = 0,
+ SXE2_COM_SW_MODE_LEGACY,
+ SXE2_COM_SW_MODE_SWITCHDEV,
+ SXE2_COM_FC_ST_CHANGE,
+
+ SXE2_COM_EC_RESET = 62,
+ SXE2_COM_EC_MAX = 63,
+};
+
+struct sxe2_ioctl_other_evt_set {
+ int32_t eventfd;
+ uint8_t resv[4];
+ uint64_t filter_table;
+};
+
+struct sxe2_ioctl_other_evt_get {
+ uint64_t evt_cause;
+ uint8_t resv[8];
+};
+
+struct sxe2_ioctl_reset_sub_set {
+ int32_t eventfd;
+ uint8_t resv[4];
+};
+
+struct sxe2_ioctl_iommu_dma_map {
+ uint64_t vaddr;
+ uint64_t iova;
+ uint64_t size;
+ uint8_t resv[4];
+};
+
+struct sxe2_ioctl_iommu_dma_unmap {
+ uint64_t iova;
+};
+
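+/*
+ * Trace identifier attached to every driver command: a 54-bit
+ * wrapping sequence counter plus a 10-bit CPU id packed into one
+ * 64-bit value, used to correlate requests in the debug logs.
+ */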
+union sxe2_drv_trace_info {
+ uint64_t id;
+ struct {
+ uint64_t count : 54;
+ uint64_t cpu_id : 10;
+ } sxe2_drv_trace_id_param;
+};
+
+#endif /* __SXE2_IOCTL_CHNL_H__ */
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
new file mode 100644
index 0000000000..7c9ad765e8
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IOCTL_CHNL_FUNC_H__
+#define __SXE2_IOCTL_CHNL_FUNC_H__
+
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+
+#include "sxe2_common.h"
+#include "sxe2_ioctl_chnl.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+__rte_internal
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+int32_t
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+ struct sxe2_drv_cmd_params *cmd_params);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_open(struct sxe2_common_device *cdev,
+ struct rte_pci_device *pci_dev);
+
+__rte_internal
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
index 3bea8fbf85..1a01054479 100644
--- a/drivers/common/sxe2/sxe2_osal.h
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -63,7 +63,7 @@ enum sxe2_itr_idx {
#define ETH_P_QINQ1 0x9100
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
-#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32)
+#define BITS_TO_uint32_t(nr) DIV_ROUND_UP(nr, 32)
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
diff --git a/drivers/meson.build b/drivers/meson.build
index 6ae102e943..d4ae512bae 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -12,6 +12,7 @@ subdirs = [
'common/qat', # depends on bus.
'common/sfc_efx', # depends on bus.
+ 'common/sxe2', # depends on bus.
 'common/zsda', # depends on bus.
'mempool', # depends on common and bus.
'dma', # depends on common and bus.
'net', # depends on common, bus, mempool
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 05/11] drivers: add base driver probe skeleton
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (3 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 04/11] drivers: add base driver skeleton liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 06/11] drivers: support PCI BAR mapping liujie5
` (5 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Initialize the eth_dev_ops for the sxe2 PMD. This includes the
implementation of mandatory ethdev operations such as dev_configure,
dev_start, dev_stop, and dev_infos_get.
Set up the basic infrastructure for device initialization to allow
the driver to be recognized as a valid ethernet device within the
DPDK framework.
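
For reference, the wiring follows the usual DPDK ethdev pattern; a
minimal sketch of the idea, using the names introduced by this patch
(trimmed for brevity):

    static const struct eth_dev_ops sxe2_eth_dev_ops = {
            .dev_configure = sxe2_dev_configure,
            .dev_start     = sxe2_dev_start,
            .dev_stop      = sxe2_dev_stop,
            .dev_close     = sxe2_dev_close,
            .dev_infos_get = sxe2_dev_infos_get,
    };

    /* in sxe2_dev_init() */
    dev->dev_ops = &sxe2_eth_dev_ops;

Hardware setup is restricted to the primary process; secondary
processes only attach the ops table.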
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 2 +-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 33 +-
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 13 +-
drivers/common/sxe2/sxe2_osal.h | 7 +-
drivers/net/meson.build | 1 +
drivers/net/sxe2/meson.build | 23 +
drivers/net/sxe2/sxe2_cmd_chnl.c | 323 +++++++++++
drivers/net/sxe2/sxe2_cmd_chnl.h | 37 ++
drivers/net/sxe2/sxe2_drv_cmd.h | 388 +++++++++++++
drivers/net/sxe2/sxe2_ethdev.c | 613 +++++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 293 ++++++++++
drivers/net/sxe2/sxe2_irq.h | 48 ++
drivers/net/sxe2/sxe2_queue.c | 38 ++
drivers/net/sxe2/sxe2_queue.h | 191 +++++++
drivers/net/sxe2/sxe2_txrx_common.h | 540 ++++++++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.h | 16 +
drivers/net/sxe2/sxe2_vsi.c | 214 +++++++
drivers/net/sxe2/sxe2_vsi.h | 204 +++++++
18 files changed, 2975 insertions(+), 9 deletions(-)
create mode 100644 drivers/net/sxe2/meson.build
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
create mode 100644 drivers/net/sxe2/sxe2_queue.c
create mode 100644 drivers/net/sxe2/sxe2_queue.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
create mode 100644 drivers/net/sxe2/sxe2_vsi.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.h
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index 27c33b1186..5cf43dd3b7 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -183,7 +183,7 @@ static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
goto l_end;
}
- ret = sxe2_drv_dev_handshark(cdev);
+ ret = sxe2_drv_dev_handshake(cdev);
if (ret != 0) {
PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret);
goto l_close_dev;
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 4c2bc452ff..11e24d04d9 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -112,9 +112,9 @@ sxe2_drv_dev_close(struct sxe2_common_device *cdev)
SXE2_CDEV_TO_CMD_FD(cdev) = -1;
}
-RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshake)
int32_t
-sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+sxe2_drv_dev_handshake(struct sxe2_common_device *cdev)
{
int32_t ret = 0;
int32_t cmd_fd = 0;
@@ -144,7 +144,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
if (ret < 0) {
PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
cmd_fd, ret, strerror(errno));
- ret = -EIO;
+ ret = -errno;
(void)pthread_mutex_unlock(&cdev->config.lock);
goto l_end;
}
@@ -158,3 +158,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
l_end:
return ret;
}
+
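+/*
+ * Release a BAR mapping previously obtained through the channel.
+ * Only the local munmap() is needed; no ioctl is issued.
+ */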
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
+int32_t
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
+{
+ int32_t ret = 0;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx",
+ virt, len);
+
+ ret = munmap(virt, len);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s",
+ virt, len, strerror(errno));
+ ret = -errno;
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
index 7c9ad765e8..710ca1a8d0 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -35,10 +35,19 @@ sxe2_drv_dev_close(struct sxe2_common_device *cdev);
__rte_internal
int32_t
-sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
+sxe2_drv_dev_handshake(struct sxe2_common_device *cdev);
+
+__rte_internal
+void
+*sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, uint8_t bar_idx,
+ uint64_t len, uint64_t offset);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len);
#ifdef __cplusplus
}
#endif
-#endif
+#endif /* __SXE2_IOCTL_CHNL_FUNC_H__ */
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
index 1a01054479..c3a395750a 100644
--- a/drivers/common/sxe2/sxe2_osal.h
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -58,9 +58,10 @@ enum sxe2_itr_idx {
SXE2_ITR_IDX_NONE,
};
-#define ETH_P_8021Q 0x8100
-#define ETH_P_8021AD 0x88a8
-#define ETH_P_QINQ1 0x9100
+#define ETH_P_8021Q 0x8100
+#define ETH_P_8021AD 0x88a8
+#define ETH_P_QINQ1 0x9100
+#define ETH_ALEN 6
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
#define BITS_TO_uint32_t(nr) DIV_ROUND_UP(nr, 32)
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c7dae4ad27..4e8ccb945f 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -58,6 +58,7 @@ drivers = [
'rnp',
'sfc',
'softnic',
+ 'sxe2',
'tap',
'thunderx',
'txgbe',
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
new file mode 100644
index 0000000000..00c38b147c
--- /dev/null
+++ b/drivers/net/sxe2/meson.build
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+if is_windows
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+cflags += ['-g']
+
+deps += ['common_sxe2', 'hash', 'cryptodev', 'security']
+
+includes += include_directories('../../common/sxe2')
+
+sources += files(
+ 'sxe2_ethdev.c',
+ 'sxe2_cmd_chnl.c',
+ 'sxe2_vsi.c',
+ 'sxe2_queue.c',
+)
+
+allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c
new file mode 100644
index 0000000000..d16b6528d0
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.c
@@ -0,0 +1,323 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_ioctl_chnl_func.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_cmd_chnl.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+static union sxe2_drv_trace_info sxe2_drv_trace_id;
+
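+/*
+ * Produce the next trace id by bumping the 54-bit sequence counter
+ * (wrapping via SXE2_DRV_TRACE_ID_COUNT_MASK). The update is not
+ * atomic, so ids are only guaranteed distinct within one thread.
+ */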
+static void sxe2_drv_trace_id_alloc(uint64_t *trace_id)
+{
+ union sxe2_drv_trace_info *trace = NULL;
+ uint64_t trace_id_count = 0;
+
+ trace = &sxe2_drv_trace_id;
+
+ trace_id_count = trace->sxe2_drv_trace_id_param.count;
+ ++trace_id_count;
+ trace->sxe2_drv_trace_id_param.count =
+ (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK);
+
+ *trace_id = trace->id;
+}
+
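+/*
+ * Fill a command descriptor for sxe2_drv_cmd_exec(). The wrapper
+ * macro below passes the stringified opcode (#opc) so the debug log
+ * can print the symbolic command name without a lookup table.
+ */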
+static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter,
+ struct sxe2_drv_cmd_params *cmd, uint32_t opc, const char *opc_str,
+ void *in_data, uint32_t in_len, void *out_data, uint32_t out_len)
+{
+ PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str);
+ cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT;
+ cmd->opcode = opc;
+ cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id;
+ cmd->repr_id = (adapter->repr_priv_data != NULL) ?
+ adapter->repr_priv_data->repr_id : 0xFFFF;
+ cmd->req_len = in_len;
+ cmd->req_data = in_data;
+ cmd->resp_len = out_len;
+ cmd->resp_data = out_data;
+
+ sxe2_drv_trace_id_alloc(&cmd->trace_id);
+}
+
+#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \
+ __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len)
+
+int32_t sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS,
+ NULL, 0, dev_caps,
+ sizeof(struct sxe2_drv_dev_caps_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_info_resp *dev_info_resp)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO,
+ NULL, 0, dev_info_resp,
+ sizeof(struct sxe2_drv_dev_info_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO,
+ NULL, 0, dev_fw_info_resp,
+ sizeof(struct sxe2_drv_dev_fw_info_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_vsi_create_req_resp vsi_req = {0};
+ struct sxe2_drv_vsi_create_req_resp vsi_resp = {0};
+
+ vsi_req.vsi_id = vsi->vsi_id;
+
+ vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt);
+ vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func;
+ vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt;
+ vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf;
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE,
+ &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp),
+ &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret);
+ goto l_end;
+ }
+
+ vsi->vsi_id = vsi_resp.vsi_id;
+ vsi->vsi_type = vsi_resp.vsi_type;
+
+l_end:
+ return ret;
+}
+
+int32_t sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_vsi_free_req vsi_req = {0};
+
+ vsi_req.vsi_id = vsi->vsi_id;
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE,
+ &vsi_req, sizeof(struct sxe2_drv_vsi_free_req),
+ NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret);
+
+ return ret;
+}
+
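+/*
+ * Rx buffer lengths programmed into the queue context are aligned to
+ * 128 bytes (SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN); RTE_ALIGN() below
+ * rounds the configured buffer length up to that boundary.
+ */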
+#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7)
+#define SXE2_RX_HDR_SIZE 256
+
+static int32_t sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq,
+ struct sxe2_drv_rxq_cfg_req *req, uint16_t rxq_cnt)
+{
+ struct sxe2_adapter *adapter = rxq->vsi->adapter;
+ struct sxe2_drv_rxq_ctxt *ctxt = req->cfg;
+ struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data;
+ int32_t ret = 0;
+
+ req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id;
+ req->q_cnt = rxq_cnt;
+ req->max_frame_size = dev_data->mtu + SXE2_ETH_OVERHEAD;
+
+ ctxt->queue_id = rxq->queue_id;
+ ctxt->depth = rxq->ring_depth;
+ ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN);
+ ctxt->dma_addr = rxq->base_addr;
+
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
+ ctxt->lro_en = 1;
+ ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size;
+ } else {
+ ctxt->lro_en = 0;
+ }
+
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ ctxt->keep_crc_en = 1;
+ else
+ ctxt->keep_crc_en = 0;
+
+ ctxt->desc_size = sizeof(union sxe2_rx_desc);
+ return ret;
+}
+
+int32_t sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_rx_queue *rxq,
+ uint16_t rxq_cnt)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_rxq_cfg_req *req = NULL;
+ uint16_t len = 0;
+
+ len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt);
+ req = rte_zmalloc("sxe2_rxq_cfg", len, 0);
+ if (req == NULL) {
+ PMD_LOG_ERR(RX, "rxq cfg mem alloc failed");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE,
+ req, len, NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+
+l_end:
+ rte_free(req);
+ return ret;
+}
+
+static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq,
+ struct sxe2_drv_txq_cfg_req *req,
+ uint16_t txq_cnt)
+{
+ struct sxe2_drv_txq_ctxt *ctxt = req->cfg;
+ uint16_t q_idx = 0;
+
+ req->vsi_id = txq->vsi->vsi_id;
+ req->q_cnt = txq_cnt;
+
+ for (q_idx = 0; q_idx < txq_cnt; q_idx++) {
+ ctxt = &req->cfg[q_idx];
+ ctxt->depth = txq[q_idx].ring_depth;
+ ctxt->dma_addr = txq[q_idx].base_addr;
+ ctxt->queue_id = txq[q_idx].queue_id;
+ }
+}
+
+int32_t sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_tx_queue *txq,
+ uint16_t txq_cnt)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_txq_cfg_req *req;
+ uint16_t len = 0;
+
+ len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt);
+ req = rte_zmalloc("sxe2_txq_cfg", len, 0);
+ if (req == NULL) {
+ PMD_LOG_ERR(TX, "txq cfg mem alloc failed");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt);
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+ req, len, NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret);
+
+l_end:
+ rte_free(req);
+ return ret;
+}
+
+int32_t sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_q_switch_req req;
+
+ req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id);
+ req.q_idx = rxq->queue_id;
+
+ req.is_enable = (uint8_t)enable;
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE,
+ &req, sizeof(req), NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d",
+ enable, ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_q_switch_req req;
+
+ req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id);
+ req.q_idx = txq->queue_id;
+
+ req.is_enable = (uint8_t)enable;
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE,
+ &req, sizeof(req), NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d",
+ enable, ret);
+ }
+
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h
new file mode 100644
index 0000000000..d0dfef9caf
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_CMD_CHNL_H__
+#define __SXE2_CMD_CHNL_H__
+
+#include "sxe2_ethdev.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+int32_t sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_caps_resp *dev_caps);
+
+int32_t sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_info_resp *dev_info_resp);
+
+int32_t sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp);
+
+int32_t sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+int32_t sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+int32_t sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable);
+
+int32_t sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable);
+
+int32_t sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_rx_queue *rxq,
+ uint16_t rxq_cnt);
+
+int32_t sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_tx_queue *txq,
+ uint16_t txq_cnt);
+
+#endif /* __SXE2_CMD_CHNL_H__ */
diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h
new file mode 100644
index 0000000000..6f0f4d0643
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_drv_cmd.h
@@ -0,0 +1,388 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_DRV_CMD_H__
+#define __SXE2_DRV_CMD_H__
+
+#include "sxe2_osal.h"
+
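+/*
+ * Driver command opcodes encode the owning module in the upper 16
+ * bits and a per-module command number in the lower 16 bits, so each
+ * module can grow its command list without renumbering the others.
+ */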
+#define SXE2_DRV_CMD_MODULE_S (16)
+#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF))
+
+#define SXE2_DEV_CAPS_OFFLOAD_L2 RTE_BIT32(0)
+#define SXE2_DEV_CAPS_OFFLOAD_VLAN RTE_BIT32(1)
+#define SXE2_DEV_CAPS_OFFLOAD_RSS RTE_BIT32(2)
+#define SXE2_DEV_CAPS_OFFLOAD_IPSEC RTE_BIT32(3)
+#define SXE2_DEV_CAPS_OFFLOAD_FNAV RTE_BIT32(4)
+#define SXE2_DEV_CAPS_OFFLOAD_TM RTE_BIT32(5)
+#define SXE2_DEV_CAPS_OFFLOAD_PTP RTE_BIT32(6)
+#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP RTE_BIT32(7)
+#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE RTE_BIT32(8)
+
+#define SXE2_TXQ_STATS_MAP_MAX_NUM 16
+#define SXE2_RXQ_STATS_MAP_MAX_NUM 4
+#define SXE2_RXQ_MAP_Q_MAX_NUM 256
+
+#define SXE2_STAT_MAP_INVALID_QID 0xFFFF
+
+#define SXE2_SCHED_MODE_DEFAULT 0
+#define SXE2_SCHED_MODE_TM 1
+#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2
+#define SXE2_SCHED_MODE_INVALID 3
+
+#define SXE2_SRCVSI_PRUNE_MAX_NUM 2
+
+#define SXE2_PTYPE_UNKNOWN RTE_BIT32(0)
+#define SXE2_PTYPE_L2_ETHER RTE_BIT32(1)
+#define SXE2_PTYPE_L3_IPV4 RTE_BIT32(2)
+#define SXE2_PTYPE_L3_IPV6 RTE_BIT32(4)
+#define SXE2_PTYPE_L4_TCP RTE_BIT32(6)
+#define SXE2_PTYPE_L4_UDP RTE_BIT32(7)
+#define SXE2_PTYPE_L4_SCTP RTE_BIT32(8)
+#define SXE2_PTYPE_INNER_L2_ETHER RTE_BIT32(9)
+#define SXE2_PTYPE_INNER_L3_IPV4 RTE_BIT32(10)
+#define SXE2_PTYPE_INNER_L3_IPV6 RTE_BIT32(12)
+#define SXE2_PTYPE_INNER_L4_TCP RTE_BIT32(14)
+#define SXE2_PTYPE_INNER_L4_UDP RTE_BIT32(15)
+#define SXE2_PTYPE_INNER_L4_SCTP RTE_BIT32(16)
+#define SXE2_PTYPE_TUNNEL_GRENAT RTE_BIT32(17)
+
+#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER)
+#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6)
+#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \
+ SXE2_PTYPE_L4_SCTP)
+#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER)
+#define SXE2_PTYPE_INNER_L3_MASK (SXE2_PTYPE_INNER_L3_IPV4 | \
+ SXE2_PTYPE_INNER_L3_IPV6)
+#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \
+ SXE2_PTYPE_INNER_L4_UDP | \
+ SXE2_PTYPE_INNER_L4_SCTP)
+#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT)
+
+enum sxe2_dev_type {
+ SXE2_DEV_T_PF = 0,
+ SXE2_DEV_T_VF,
+ SXE2_DEV_T_PF_BOND,
+ SXE2_DEV_T_MAX,
+};
+
+struct sxe2_drv_queue_caps {
+ uint16_t queues_cnt;
+ uint16_t base_idx_in_pf;
+};
+
+struct sxe2_drv_msix_caps {
+ uint16_t msix_vectors_cnt;
+ uint16_t base_idx_in_func;
+};
+
+struct sxe2_drv_rss_hash_caps {
+ uint16_t hash_key_size;
+ uint16_t lut_key_size;
+};
+
+enum sxe2_vf_vsi_valid {
+ SXE2_VF_VSI_BOTH = 0,
+ SXE2_VF_VSI_ONLY_DPDK,
+ SXE2_VF_VSI_ONLY_KERNEL,
+ SXE2_VF_VSI_MAX,
+};
+
+struct sxe2_drv_vsi_caps {
+ uint16_t func_id;
+ uint16_t dpdk_vsi_id;
+ uint16_t kernel_vsi_id;
+ uint16_t vsi_type;
+};
+
+struct sxe2_drv_representor_caps {
+ uint16_t cnt_repr_vf;
+ uint8_t rsv[2];
+ struct sxe2_drv_vsi_caps repr_vf_id[256];
+};
+
+enum sxe2_phys_port_name_type {
+ SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0,
+ SXE2_PHYS_PORT_NAME_TYPE_LEGACY,
+ SXE2_PHYS_PORT_NAME_TYPE_UPLINK,
+ SXE2_PHYS_PORT_NAME_TYPE_PFVF,
+
+ SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN,
+};
+
+struct sxe2_switchdev_mode_info {
+ uint8_t pf_id;
+ uint8_t is_switchdev;
+ uint8_t rsv[2];
+};
+
+struct sxe2_switchdev_cpvsi_info {
+ uint16_t cp_vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_txsch_caps {
+ uint8_t layer_cap;
+ uint8_t tm_mid_node_num;
+ uint8_t prio_num;
+ uint8_t rev;
+};
+
+struct sxe2_drv_dev_caps_resp {
+ struct sxe2_drv_queue_caps queue_caps;
+ struct sxe2_drv_msix_caps msix_caps;
+ struct sxe2_drv_rss_hash_caps rss_hash_caps;
+ struct sxe2_drv_vsi_caps vsi_caps;
+ struct sxe2_txsch_caps txsch_caps;
+ struct sxe2_drv_representor_caps repr_caps;
+ uint8_t port_idx;
+ uint8_t pf_idx;
+ uint8_t dev_type;
+ uint8_t rev;
+ uint32_t cap_flags;
+};
+
+struct sxe2_drv_dev_info_resp {
+ uint64_t dsn;
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+ uint8_t mac_addr[ETH_ALEN];
+ uint8_t rsv2[2];
+};
+
+struct sxe2_drv_dev_fw_info_resp {
+ uint8_t main_version_id;
+ uint8_t sub_version_id;
+ uint8_t fix_version_id;
+ uint8_t build_id;
+};
+
+struct sxe2_drv_rxq_ctxt {
+ uint64_t dma_addr;
+ uint32_t max_lro_size;
+ uint32_t split_type_mask;
+ uint16_t hdr_len;
+ uint16_t buf_len;
+ uint16_t depth;
+ uint16_t queue_id;
+ uint8_t lro_en;
+ uint8_t keep_crc_en;
+ uint8_t split_en;
+ uint8_t desc_size;
+};
+
+struct sxe2_drv_rxq_cfg_req {
+ uint16_t q_cnt;
+ uint16_t vsi_id;
+ uint16_t max_frame_size;
+ uint8_t rsv[2];
+ struct sxe2_drv_rxq_ctxt cfg[];
+};
+
+struct sxe2_drv_txq_ctxt {
+ uint64_t dma_addr;
+ uint32_t sched_mode;
+ uint16_t queue_id;
+ uint16_t depth;
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_txq_cfg_req {
+ uint16_t q_cnt;
+ uint16_t vsi_id;
+ struct sxe2_drv_txq_ctxt cfg[];
+};
+
+struct sxe2_drv_q_switch_req {
+ uint16_t q_idx;
+ uint16_t vsi_id;
+ uint8_t is_enable;
+ uint8_t sched_mode;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_create_req_resp {
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_drv_queue_caps used_queues;
+ struct sxe2_drv_msix_caps used_msix;
+};
+
+struct sxe2_drv_vsi_free_req {
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_info_get_req {
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_info_get_resp {
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_drv_queue_caps used_queues;
+ struct sxe2_drv_msix_caps used_msix;
+};
+
+enum sxe2_drv_cmd_module {
+ SXE2_DRV_CMD_MODULE_HANDSHAKE = 0,
+ SXE2_DRV_CMD_MODULE_DEV = 1,
+ SXE2_DRV_CMD_MODULE_VSI = 2,
+ SXE2_DRV_CMD_MODULE_QUEUE = 3,
+ SXE2_DRV_CMD_MODULE_STATS = 4,
+ SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5,
+ SXE2_DRV_CMD_MODULE_RSS = 6,
+ SXE2_DRV_CMD_MODULE_FLOW = 7,
+ SXE2_DRV_CMD_MODULE_TM = 8,
+ SXE2_DRV_CMD_MODULE_IPSEC = 9,
+ SXE2_DRV_CMD_MODULE_PTP = 10,
+
+ SXE2_DRV_CMD_MODULE_VLAN = 11,
+ SXE2_DRV_CMD_MODULE_RDMA = 12,
+ SXE2_DRV_CMD_MODULE_LINK = 13,
+ SXE2_DRV_CMD_MODULE_MACADDR = 14,
+ SXE2_DRV_CMD_MODULE_PROMISC = 15,
+
+ SXE2_DRV_CMD_MODULE_LED = 16,
+ SXE2_DEV_CMD_MODULE_OPT = 17,
+ SXE2_DEV_CMD_MODULE_SWITCH = 18,
+ SXE2_DRV_CMD_MODULE_ACL = 19,
+ SXE2_DRV_CMD_MODULE_UDPTUNEEL = 20,
+ SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21,
+
+ SXE2_DRV_CMD_MODULE_SCHED = 22,
+
+ SXE2_DRV_CMD_MODULE_IRQ = 23,
+
+ SXE2_DRV_CMD_MODULE_OPT = 24,
+};
+
+enum sxe2_drv_cmd_code {
+ SXE2_DRV_CMD_HANDSHAKE_ENABLE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1),
+ SXE2_DRV_CMD_HANDSHAKE_DISABLE,
+
+ SXE2_DRV_CMD_DEV_GET_CAPS =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1),
+ SXE2_DRV_CMD_DEV_GET_INFO,
+ SXE2_DRV_CMD_DEV_GET_FW_INFO,
+ SXE2_DRV_CMD_DEV_RESET,
+ SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO,
+
+ SXE2_DRV_CMD_VSI_CREATE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1),
+ SXE2_DRV_CMD_VSI_FREE,
+ SXE2_DRV_CMD_VSI_INFO_GET,
+ SXE2_DRV_CMD_VSI_SRCVSI_PRUNE,
+ SXE2_DRV_CMD_VSI_FC_GET,
+
+ SXE2_DRV_CMD_RX_MAP_SET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1),
+ SXE2_DRV_CMD_TX_MAP_SET,
+ SXE2_DRV_CMD_TX_RX_MAP_GET,
+ SXE2_DRV_CMD_TX_RX_MAP_RESET,
+ SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR,
+
+ SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1),
+ SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE,
+ SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE,
+ SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE,
+ SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE,
+
+ SXE2_DRV_CMD_RXQ_CFG_ENABLE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1),
+ SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+ SXE2_DRV_CMD_RXQ_DISABLE,
+ SXE2_DRV_CMD_TXQ_DISABLE,
+
+ SXE2_DRV_CMD_VSI_STATS_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1),
+ SXE2_DRV_CMD_VSI_STATS_CLEAR,
+ SXE2_DRV_CMD_MAC_STATS_GET,
+ SXE2_DRV_CMD_MAC_STATS_CLEAR,
+
+ SXE2_DRV_CMD_RSS_KEY_SET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1),
+ SXE2_DRV_CMD_RSS_LUT_SET,
+ SXE2_DRV_CMD_RSS_FUNC_SET,
+ SXE2_DRV_CMD_RSS_HF_ADD,
+ SXE2_DRV_CMD_RSS_HF_DEL,
+ SXE2_DRV_CMD_RSS_HF_CLEAR,
+
+ SXE2_DRV_CMD_FLOW_FILTER_ADD =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1),
+ SXE2_DRV_CMD_FLOW_FILTER_DEL,
+ SXE2_DRV_CMD_FLOW_FILTER_CLEAR,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY,
+
+ SXE2_DRV_CMD_DEL_TM_ROOT =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1),
+ SXE2_DRV_CMD_ADD_TM_ROOT,
+ SXE2_DRV_CMD_ADD_TM_NODE,
+ SXE2_DRV_CMD_ADD_TM_QUEUE,
+
+ SXE2_DRV_CMD_GET_PTP_CLOCK =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1),
+
+ SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1),
+ SXE2_DRV_CMD_VLAN_FILTER_SWITCH,
+ SXE2_DRV_CMD_VLAN_OFFLOAD_CFG,
+ SXE2_DRV_CMD_VLAN_PORTVLAN_CFG,
+ SXE2_DRV_CMD_VLAN_CFG_QUERY,
+
+ SXE2_DRV_CMD_RDMA_DUMP_PCAP =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1),
+
+ SXE2_DRV_CMD_LINK_STATUS_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1),
+
+ SXE2_DRV_CMD_MAC_ADDR_UC =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1),
+ SXE2_DRV_CMD_MAC_ADDR_MC,
+
+ SXE2_DRV_CMD_PROMISC_CFG =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1),
+ SXE2_DRV_CMD_ALLMULTI_CFG,
+
+ SXE2_DRV_CMD_LED_CTRL =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1),
+
+ SXE2_DRV_CMD_OPT_EEP =
+ SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1),
+
+ SXE2_DRV_CMD_SWITCH =
+ SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1),
+ SXE2_DRV_CMD_SWITCH_UPLINK,
+ SXE2_DRV_CMD_SWITCH_REPR,
+ SXE2_DRV_CMD_SWITCH_MODE,
+ SXE2_DRV_CMD_SWITCH_CPVSI,
+
+ SXE2_DRV_CMD_UDPTUNNEL_ADD =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNEEL, 1),
+ SXE2_DRV_CMD_UDPTUNNEL_DEL,
+ SXE2_DRV_CMD_UDPTUNNEL_GET,
+
+ SXE2_DRV_CMD_IPSEC_CAP_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1),
+ SXE2_DRV_CMD_IPSEC_TXSA_ADD,
+ SXE2_DRV_CMD_IPSEC_RXSA_ADD,
+ SXE2_DRV_CMD_IPSEC_TXSA_DEL,
+ SXE2_DRV_CMD_IPSEC_RXSA_DEL,
+ SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR,
+
+ SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1),
+
+ SXE2_DRV_CMD_OPT_EEP_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1),
+};
+
+#endif /* __SXE2_DRV_CMD_H__ */
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
new file mode 100644
index 0000000000..f0bdda38a7
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -0,0 +1,613 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <rte_tailq.h>
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+#include <dev_driver.h>
+#include <ethdev_driver.h>
+#include <rte_ethdev.h>
+#include <rte_alarm.h>
+#include <rte_dev_info.h>
+#include <rte_pci.h>
+#include <rte_mbuf_dyn.h>
+#include <rte_cycles.h>
+#include <rte_eal_paging.h>
+
+#include "sxe2_ethdev.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_cmd_chnl.h"
+#include "sxe2_common.h"
+#include "sxe2_common_log.h"
+#include "sxe2_host_regs.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_PCI_VENDOR_ID_1 0x1ff2
+#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1
+#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2
+
+#define SXE2_PCI_VENDOR_ID_2 0x1d94
+#define SXE2_PCI_DEVICE_ID_PF_2 0x1260
+#define SXE2_PCI_DEVICE_ID_VF_2 0x126f
+
+#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3
+#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4
+
+#define SXE2_PCI_VENDOR_ID_206F 0x206f
+
+static const struct rte_pci_id pci_id_sxe2_tbl[] = {
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)},
+ { .vendor_id = 0, },
+};
+
+static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ return ret;
+}
+
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
+static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
+static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ if (adapter->started == 0)
+ goto l_end;
+
+ sxe2_txqs_all_stop(dev);
+ sxe2_rxqs_all_stop(dev);
+
+ dev->data->dev_started = 0;
+ adapter->started = 0;
+l_end:
+ return ret;
+}
+
+static int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+static int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ ret = sxe2_txqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start tx queue.");
+ goto l_end;
+ }
+
+ ret = sxe2_rxqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start rx queue.");
+ sxe2_txqs_all_stop(dev);
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_queues_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to init queues.");
+ goto l_end;
+ }
+
+ ret = sxe2_queues_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "enable queues failed");
+ goto l_end;
+ }
+
+ dev->data->dev_started = 1;
+ adapter->started = 1;
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
+{
+ (void)sxe2_dev_stop(dev);
+
+ sxe2_vsi_uninit(dev);
+
+ return 0;
+}
+
+static int32_t sxe2_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+
+ dev_info->max_rx_queues = vsi->rxqs.q_cnt;
+ dev_info->max_tx_queues = vsi->txqs.q_cnt;
+ dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE;
+ dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX;
+ dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM;
+ dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+ dev_info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO;
+
+ dev_info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+ dev_info->rx_queue_offload_capa =
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->tx_queue_offload_capa =
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_thresh = {
+ .pthresh = SXE2_DEFAULT_RX_PTHRESH,
+ .hthresh = SXE2_DEFAULT_RX_HTHRESH,
+ .wthresh = SXE2_DEFAULT_RX_WTHRESH,
+ },
+ .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_thresh = {
+ .pthresh = SXE2_DEFAULT_TX_PTHRESH,
+ .hthresh = SXE2_DEFAULT_TX_HTHRESH,
+ .wthresh = SXE2_DEFAULT_TX_WTHRESH,
+ },
+ .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = SXE2_MAX_RING_DESC,
+ .nb_min = SXE2_MIN_RING_DESC,
+ .nb_align = SXE2_ALIGN,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = SXE2_MAX_RING_DESC,
+ .nb_min = SXE2_MIN_RING_DESC,
+ .nb_align = SXE2_ALIGN,
+ .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX,
+ .nb_seg_max = SXE2_MAX_RING_DESC,
+ };
+
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+
+ dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST;
+ dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST;
+ dev_info->default_rxportconf.nb_queues = 1;
+ dev_info->default_txportconf.nb_queues = 1;
+ dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN;
+ dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN;
+
+ dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG;
+ dev_info->rx_seg_capa.multi_pools = true;
+ dev_info->rx_seg_capa.offset_allowed = false;
+ dev_info->rx_seg_capa.offset_align_log2 = 0;
+
+ return 0;
+}
+
+static const struct eth_dev_ops sxe2_eth_dev_ops = {
+ .dev_configure = sxe2_dev_configure,
+ .dev_start = sxe2_dev_start,
+ .dev_stop = sxe2_dev_stop,
+ .dev_close = sxe2_dev_close,
+ .dev_infos_get = sxe2_dev_infos_get,
+};
+
+static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+ adapter->port_idx = dev_caps->port_idx;
+
+ adapter->cap_flags = 0;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE;
+}
+
+static int32_t sxe2_func_caps_get(struct sxe2_adapter *adapter)
+{
+ int32_t ret = -1;
+ struct sxe2_drv_dev_caps_resp dev_caps = {0};
+
+ ret = sxe2_drv_dev_caps_get(adapter, &dev_caps);
+ if (ret)
+ goto l_end;
+
+ adapter->dev_type = dev_caps.dev_type;
+
+ sxe2_drv_dev_caps_set(adapter, &dev_caps);
+
+ sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps);
+
+ sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
+{
+ int32_t ret = -1;
+
+ ret = sxe2_func_caps_get(adapter);
+ if (ret)
+ PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret);
+
+ return ret;
+}
+
+static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret = -1;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_dev_caps_get(adapter);
+ if (ret)
+ PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret);
+
+ return ret;
+}
+
+static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter =
+ SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+ struct sxe2_dev_info *dev_info = &adapter->dev_info;
+ struct sxe2_drv_dev_info_resp dev_info_resp = {0};
+ struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0};
+ int32_t ret = 0;
+
+ dev_info->pci.bus_devid = pci_dev->addr.devid;
+ dev_info->pci.bus_function = pci_dev->addr.function;
+
+ ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret);
+ goto l_end;
+ }
+ dev_info->pci.serial_number = dev_info_resp.dsn;
+
+ ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret);
+ goto l_end;
+ }
+ dev_info->fw.build_id = dev_fw_info_resp.build_id;
+ dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id;
+ dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id;
+ dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id;
+
+ if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr))
+ rte_ether_addr_copy((struct rte_ether_addr *)dev_info_resp.mac_addr,
+ (struct rte_ether_addr *)dev_info->mac.perm_addr);
+ else
+ rte_eth_random_addr(dev_info->mac.perm_addr);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
+ struct sxe2_dev_kvargs_info *kvargs __rte_unused)
+{
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->dev_ops = &sxe2_eth_dev_ops;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_hw_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret);
+ goto l_end;
+ }
+
+ ret = sxe2_vsi_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret);
+ goto init_vsi_err;
+ }
+
+ ret = sxe2_dev_info_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret);
+ goto init_dev_info_err;
+ }
+
+ goto l_end;
+
+init_dev_info_err:
+ sxe2_vsi_uninit(dev);
+init_vsi_err:
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_uninit(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_dev_close(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_eth_pmd_remove(struct sxe2_common_device *cdev)
+{
+ struct rte_eth_dev *eth_dev;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+ int32_t ret = 0;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (!eth_dev) {
+ PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed");
+ goto l_end;
+ }
+
+ ret = sxe2_dev_uninit(eth_dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret);
+ goto l_end;
+ }
+ (void)rte_eth_dev_release_port(eth_dev);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev,
+ struct rte_eth_devargs *req_eth_da __rte_unused,
+ uint16_t owner_id __rte_unused,
+ struct sxe2_dev_kvargs_info *kvargs)
+{
+ struct rte_pci_device *pci_dev = NULL;
+ struct rte_eth_dev *eth_dev = NULL;
+ struct sxe2_adapter *adapter = NULL;
+ int32_t ret = 0;
+
+ if (!cdev) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+
+ eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter));
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ if (eth_dev == NULL) {
+ PMD_LOG_ERR(INIT, "Can not allocate ethdev");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+ } else {
+ if (!eth_dev) {
+ PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev");
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+ adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev);
+ adapter->dev_port_id = eth_dev->data->port_id;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ adapter->cdev = cdev;
+
+ ret = sxe2_dev_init(eth_dev, kvargs);
+ if (ret != 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret);
+ goto l_release_port;
+ }
+
+ rte_eth_dev_probing_finish(eth_dev);
+ PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!");
+ goto l_end;
+
+l_release_port:
+ (void)rte_eth_dev_release_port(eth_dev);
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
+ struct rte_eth_devargs *eth_da)
+{
+ int ret = 0;
+
+ if (dev->devargs == NULL)
+ return 0;
+
+ memset(eth_da, 0, sizeof(*eth_da));
+
+ if (dev->devargs->cls_str) {
+ ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
+ if (ret != 0) {
+ PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+ dev->devargs->cls_str);
+ return -rte_errno;
+ }
+ }
+
+ if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
+ ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+ dev->devargs->args);
+ return -rte_errno;
+ }
+ }
+
+ return 0;
+}
+
+static int32_t sxe2_eth_pmd_probe(struct sxe2_common_device *cdev,
+ struct sxe2_dev_kvargs_info *kvargs)
+{
+ struct rte_eth_devargs eth_da = { .nb_ports = 0 };
+ int32_t ret = 0;
+
+ ret = sxe2_parse_eth_devargs(cdev->dev, ð_da);
+ if (ret != 0) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = sxe2_eth_pmd_probe_pf(cdev, ð_da, 0, kvargs);
+
+l_end:
+ return ret;
+}
+
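+/*
+ * Ethdev class driver registered with the sxe2 common module at
+ * constructor time; the common module invokes the probe/remove
+ * callbacks for each matching PCI device.
+ */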
+static struct sxe2_class_driver sxe2_eth_pmd = {
+ .drv_class = SXE2_CLASS_TYPE_ETH,
+ .name = "SXE2_ETH_PMD_DRIVER_NAME",
+ .probe = sxe2_eth_pmd_probe,
+ .remove = sxe2_eth_pmd_remove,
+ .id_table = pci_id_sxe2_tbl,
+ .intr_lsc = 1,
+ .intr_rmv = 1,
+};
+
+RTE_INIT(rte_sxe2_pmd_init)
+{
+ sxe2_common_init();
+ sxe2_class_driver_register(&sxe2_eth_pmd);
+}
+
+RTE_PMD_EXPORT_NAME(net_sxe2);
+RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl);
+RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2");
+
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE);
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
new file mode 100644
index 0000000000..de6d2e9d6c
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -0,0 +1,293 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+#ifndef __SXE2_ETHDEV_H__
+#define __SXE2_ETHDEV_H__
+#include <rte_compat.h>
+#include <rte_kvargs.h>
+#include <rte_time.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_tm_driver.h>
+#include <rte_io.h>
+
+#include "sxe2_common.h"
+#include "sxe2_vsi.h"
+#include "sxe2_queue.h"
+#include "sxe2_irq.h"
+#include "sxe2_osal.h"
+
+struct sxe2_link_msg {
+ uint32_t speed;
+ uint8_t status;
+};
+
+enum sxe2_fnav_tunnel_flag_type {
+ SXE2_FNAV_TUN_FLAG_NO_TUNNEL,
+ SXE2_FNAV_TUN_FLAG_TUNNEL,
+ SXE2_FNAV_TUN_FLAG_ANY,
+};
+
+#define SXE2_VF_MAX_NUM 256
+#define SXE2_VSI_MAX_NUM 768
+#define SXE2_FRAME_SIZE_MAX 9832
+#define SXE2_VLAN_TAG_SIZE 4
+#define SXE2_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE)
+#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD)
+
+#ifdef SXE2_TEST
+#define SXE2_RESET_ACTIVE_WAIT_COUNT (5)
+#else
+#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000)
+#endif
+#define SXE2_NO_ACTIVE_CNT (10)
+
+#define SXE2_WOKER_DELAY_5MS (5)
+#define SXE2_WOKER_DELAY_10MS (10)
+#define SXE2_WOKER_DELAY_20MS (20)
+#define SXE2_WOKER_DELAY_30MS (30)
+
+#define SXE2_RESET_DETEC_WAIT_COUNT (100)
+#define SXE2_RESET_DONE_WAIT_COUNT (250)
+#define SXE2_RESET_WAIT_MS (10)
+
+#define SXE2_RESET_WAIT_MIN (10)
+#define SXE2_RESET_WAIT_MAX (20)
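+/* shift in two 16-bit steps so the result stays defined when n is a 32-bit type */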
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
+
+#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0
+#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2
+#define SXE2_MODULE_TYPE_SFP 0x03
+#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D
+#define SXE2_MODULE_TYPE_QSFP28 0x11
+#define SXE2_MODULE_SFF_ADDR_MODE 0x04
+#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40
+#define SXE2_MODULE_REVISION_ADDR 0x01
+#define SXE2_MODULE_SFF_8472_COMP 0x5E
+#define SXE2_MODULE_SFF_8472_SWAP 0x5C
+#define SXE2_MODULE_QSFP_MAX_LEN 640
+#define SXE2_MODULE_SFF_8472_UNSUP 0x0
+#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40
+#define SXE2_MODULE_SFF_SFP_TYPE 0x03
+
+#define SXE2_MODULE_SFF_8079 0x1
+#define SXE2_MODULE_SFF_8079_LEN 256
+#define SXE2_MODULE_SFF_8472 0x2
+#define SXE2_MODULE_SFF_8472_LEN 512
+#define SXE2_MODULE_SFF_8636 0x3
+#define SXE2_MODULE_SFF_8636_LEN 256
+#define SXE2_MODULE_SFF_8636_MAX_LEN 640
+#define SXE2_MODULE_SFF_8436 0x4
+#define SXE2_MODULE_SFF_8436_LEN 256
+#define SXE2_MODULE_SFF_8436_MAX_LEN 640
+
+enum sxe2_wk_type {
+ SXE2_WK_MONITOR,
+ SXE2_WK_MONITOR_IM,
+ SXE2_WK_POST,
+ SXE2_WK_MBX,
+};
+
+enum {
+ SXE2_FLAG_LEGACY_RX_ENABLE = 0,
+ SXE2_FLAG_LRO_ENABLE = 1,
+ SXE2_FLAG_RXQ_DISABLED = 2,
+ SXE2_FLAG_TXQ_DISABLED = 3,
+ SXE2_FLAG_DRV_REMOVING = 4,
+ SXE2_FLAG_RESET_DETECTED = 5,
+ SXE2_FLAG_CORE_RESET_DONE = 6,
+ SXE2_FLAG_RESET_ACTIVED = 7,
+ SXE2_FLAG_RESET_PENDING = 8,
+ SXE2_FLAG_RESET_REQUEST = 9,
+ SXE2_FLAGS_RESET_PROCESS_DONE = 10,
+ SXE2_FLAG_RESET_FAILED = 11,
+ SXE2_FLAG_DRV_PROBE_DONE = 12,
+ SXE2_FLAG_NETDEV_REGISTED = 13,
+ SXE2_FLAG_DRV_UP = 15,
+ SXE2_FLAG_DCB_ENABLE = 16,
+ SXE2_FLAG_FLTR_SYNC = 17,
+
+ SXE2_FLAG_EVENT_IRQ_DISABLED = 18,
+ SXE2_FLAG_SUSPEND = 19,
+ SXE2_FLAG_FNAV_ENABLE = 20,
+
+ SXE2_FLAGS_NBITS
+};
+
+struct sxe2_link_context {
+ rte_spinlock_t link_lock;
+ bool link_up;
+ uint32_t speed;
+};
+
+struct sxe2_devargs {
+ uint8_t flow_dup_pattern_mode;
+ uint8_t func_flow_direct_en;
+ uint8_t fnav_stat_type;
+ uint8_t high_performance_mode;
+ uint8_t sched_layer_mode;
+ uint8_t sw_stats_en;
+ uint8_t rx_low_latency;
+};
+
+#define SXE2_PCI_MAP_BAR_INVALID ((uint8_t)0xff)
+#define SXE2_PCI_MAP_INVALID_VAL ((uint32_t)0xffffffff)
+
+enum sxe2_pci_map_resource {
+ SXE2_PCI_MAP_RES_INVALID = 0,
+ SXE2_PCI_MAP_RES_DOORBELL_TX,
+ SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ SXE2_PCI_MAP_RES_IRQ_DYN,
+ SXE2_PCI_MAP_RES_IRQ_ITR,
+ SXE2_PCI_MAP_RES_IRQ_MSIX,
+ SXE2_PCI_MAP_RES_PTP,
+ SXE2_PCI_MAP_RES_MAX_COUNT,
+};
+
+enum sxe2_udp_tunnel_protocol {
+ SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0,
+ SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE,
+ SXE2_UDP_TUNNEL_PROTOCOL_GENEVE,
+ SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4,
+ SXE2_UDP_TUNNEL_PROTOCOL_GTP_U,
+ SXE2_UDP_TUNNEL_PROTOCOL_PFCP,
+ SXE2_UDP_TUNNEL_PROTOCOL_ECPRI,
+ SXE2_UDP_TUNNEL_PROTOCOL_MPLS,
+ SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10,
+ SXE2_UDP_TUNNEL_PROTOCOL_L2TP,
+ SXE2_UDP_TUNNEL_PROTOCOL_TEREDO,
+ SXE2_UDP_TUNNEL_MAX,
+};
+
+struct sxe2_pci_map_addr_info {
+ uint64_t addr_base;
+ uint8_t bar_idx;
+ uint8_t reg_width;
+};
+
+struct sxe2_pci_map_segment_info {
+ enum sxe2_pci_map_resource type;
+ void *addr;
+ uint64_t page_inner_offset;
+ uint64_t len;
+};
+
+struct sxe2_pci_map_bar_info {
+ uint8_t bar_idx;
+ uint8_t map_cnt;
+ struct sxe2_pci_map_segment_info *seg_info;
+};
+
+struct sxe2_pci_map_context {
+ uint8_t bar_cnt;
+ struct sxe2_pci_map_bar_info *bar_info;
+ struct sxe2_pci_map_addr_info *addr_info;
+};
+
+struct sxe2_dev_mac_info {
+ uint8_t perm_addr[ETH_ALEN];
+};
+
+struct sxe2_pci_info {
+ uint64_t serial_number;
+ uint8_t bus_devid;
+ uint8_t bus_function;
+ uint16_t max_vfs;
+};
+
+struct sxe2_fw_info {
+ uint8_t main_version_id;
+ uint8_t sub_version_id;
+ uint8_t fix_version_id;
+ uint8_t build_id;
+};
+
+struct sxe2_dev_info {
+ struct rte_eth_dev_data *dev_data;
+ struct sxe2_pci_info pci;
+ struct sxe2_fw_info fw;
+ struct sxe2_dev_mac_info mac;
+};
+
+enum sxe2_udp_tunnel_status {
+ SXE2_UDP_TUNNEL_DISABLE = 0x0,
+ SXE2_UDP_TUNNEL_ENABLE,
+};
+
+struct sxe2_udp_tunnel_cfg {
+ uint8_t protocol;
+ uint8_t dev_status;
+ uint16_t dev_port;
+ uint16_t dev_ref_cnt;
+
+ uint16_t fw_port;
+ uint8_t fw_status;
+ uint8_t fw_dst_en;
+ uint8_t fw_src_en;
+ uint8_t fw_used;
+};
+
+struct sxe2_udp_tunnel_ctx {
+ struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX];
+ rte_spinlock_t lock;
+};
+
+struct sxe2_repr_context {
+ uint16_t nb_vf;
+ uint16_t nb_repr_vf;
+ struct rte_eth_dev **vf_rep_eth_dev;
+ struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM];
+};
+
+struct sxe2_repr_private_data {
+ struct rte_eth_dev *rep_eth_dev;
+ struct sxe2_adapter *parent_adapter;
+
+ struct sxe2_vsi *cp_vsi;
+ uint16_t repr_q_id;
+
+ uint16_t repr_id;
+ uint16_t repr_pf_id;
+ uint16_t repr_vf_id;
+ uint16_t repr_vf_vsi_id;
+ uint16_t repr_vf_k_vsi_id;
+ uint16_t repr_vf_u_vsi_id;
+};
+
+struct sxe2_sched_hw_cap {
+ uint32_t tm_layers;
+ uint8_t root_max_children;
+ uint8_t prio_max;
+ uint8_t adj_lvl;
+};
+
+struct sxe2_adapter {
+ struct sxe2_common_device *cdev;
+ struct sxe2_dev_info dev_info;
+ struct rte_pci_device *pci_dev;
+ struct sxe2_repr_private_data *repr_priv_data;
+ struct sxe2_pci_map_context map_ctxt;
+ struct sxe2_irq_context irq_ctxt;
+ struct sxe2_queue_context q_ctxt;
+ struct sxe2_vsi_context vsi_ctxt;
+ struct sxe2_devargs devargs;
+ uint16_t dev_port_id;
+ uint64_t cap_flags;
+ enum sxe2_dev_type dev_type;
+ uint32_t ptype_tbl[SXE2_MAX_PTYPE_NUM];
+ struct rte_ether_addr mac_addr;
+ uint8_t port_idx;
+ uint8_t pf_idx;
+ uint32_t tx_mode_flags;
+ uint32_t rx_mode_flags;
+ uint8_t started;
+};
+
+#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \
+ ((struct sxe2_adapter *)(dev)->data->dev_private)
+
+#endif /* __SXE2_ETHDEV_H__ */
diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h
new file mode 100644
index 0000000000..bb96c6d842
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_irq.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IRQ_H__
+#define __SXE2_IRQ_H__
+
+#include <ethdev_driver.h>
+
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_IRQ_MAX_CNT 2048
+
+#define SXE2_LAN_MSIX_MIN_CNT 1
+
+#define SXE2_EVENT_IRQ_IDX 0
+
+#define SXE2_MAX_INTR_QUEUE_NUM 256
+
+#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16)
+
+#define SXE2_ITR_1000K 1
+#define SXE2_ITR_500K 2
+#define SXE2_ITR_50K 20
+
+#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K)
+#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K)
+
+struct sxe2_fwc_msix_caps;
+struct sxe2_adapter;
+
+struct sxe2_irq_context {
+ struct rte_intr_handle *reset_handle;
+ int32_t reset_event_fd;
+ int32_t other_event_fd;
+
+ uint16_t max_cnt_hw;
+ uint16_t base_idx_in_func;
+
+ uint16_t rxq_avail_cnt;
+ uint16_t rxq_base_idx_in_pf;
+
+ uint16_t rxq_irq_cnt;
+ uint32_t *rxq_msix_idx;
+ int32_t *rxq_event_fd;
+};
+
+#endif /* __SXE2_IRQ_H__ */
diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c
new file mode 100644
index 0000000000..93f8236381
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_ethdev.h"
+#include "sxe2_queue.h"
+#include "sxe2_common_log.h"
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_queue_caps *q_caps)
+{
+ adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt;
+ adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf;
+}
+
+int32_t sxe2_queues_init(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ uint16_t buf_size;
+ uint16_t frame_size;
+ struct sxe2_rx_queue *rxq;
+ uint16_t nb_rxq;
+
+ frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD;
+ for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (!rxq)
+ continue;
+
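+ /* round the buffer length down to (1 << SXE2_RXQ_CTX_DBUFF_SHIFT) = 128-byte granularity */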
+ buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT));
+ rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE);
+ if (frame_size > rxq->rx_buf_len)
+ dev->data->scattered_rx = 1;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h
new file mode 100644
index 0000000000..e587e582fa
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.h
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_QUEUE_H__
+#define __SXE2_QUEUE_H__
+#include <rte_ethdev.h>
+#include <rte_io.h>
+#include <rte_stdatomic.h>
+#include <ethdev_driver.h>
+
+#include "sxe2_drv_cmd.h"
+#include "sxe2_txrx_common.h"
+
+#define SXE2_PCI_REG_READ(reg) \
+ rte_read32(reg)
+#define SXE2_PCI_REG_WRITE_WC(reg, value) \
+ rte_write32_wc((rte_cpu_to_le_32(value)), reg)
+#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \
+ rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
+
+struct sxe2_queue_context {
+ uint16_t qp_cnt_assign;
+ uint16_t base_idx_in_pf;
+
+ uint32_t tx_mode_flags;
+ uint32_t rx_mode_flags;
+};
+
+struct sxe2_tx_buffer {
+ struct rte_mbuf *mbuf;
+
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+struct sxe2_tx_buffer_vec {
+ struct rte_mbuf *mbuf;
+};
+
+struct sxe2_txq_stats {
+ uint64_t tx_restart;
+ uint64_t tx_busy;
+
+ uint64_t tx_linearize;
+ uint64_t tx_tso_linearize_chk;
+ uint64_t tx_vlan_insert;
+ uint64_t tx_tso_packets;
+ uint64_t tx_tso_bytes;
+ uint64_t tx_csum_none;
+ uint64_t tx_csum_partial;
+ uint64_t tx_csum_partial_inner;
+ uint64_t tx_queue_dropped;
+ uint64_t tx_xmit_more;
+ uint64_t tx_pkts_num;
+ uint64_t tx_desc_not_done;
+};
+
+struct sxe2_tx_queue;
+struct sxe2_txq_ops {
+ void (*queue_reset)(struct sxe2_tx_queue *txq);
+ void (*mbufs_release)(struct sxe2_tx_queue *txq);
+ void (*buffer_ring_free)(struct sxe2_tx_queue *txq);
+};
+struct sxe2_tx_queue {
+ volatile union sxe2_tx_data_desc *desc_ring;
+ struct sxe2_tx_buffer *buffer_ring;
+ volatile uint32_t *tdt_reg_addr;
+
+ uint64_t offloads;
+ uint16_t ring_depth;
+ uint16_t desc_free_num;
+
+ uint16_t free_thresh;
+
+ uint16_t rs_thresh;
+ uint16_t next_use;
+ uint16_t next_clean;
+
+ uint16_t desc_used_num;
+ uint16_t next_dd;
+ uint16_t next_rs;
+ uint16_t ipsec_pkt_md_offset;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint16_t idx_in_func;
+ bool tx_deferred_start;
+ uint8_t pthresh;
+ uint8_t hthresh;
+ uint8_t wthresh;
+ uint16_t reg_idx;
+ uint64_t base_addr;
+ struct sxe2_vsi *vsi;
+ const struct rte_memzone *mz;
+ struct sxe2_txq_ops ops;
+ uint8_t vlan_flag;
+ uint8_t use_ctx:1,
+ res:7;
+};
+struct sxe2_rx_queue;
+struct sxe2_rxq_ops {
+ void (*queue_reset)(struct sxe2_rx_queue *rxq);
+ void (*mbufs_release)(struct sxe2_rx_queue *rxq);
+};
+struct sxe2_rxq_stats {
+ uint64_t rx_pkts_num;
+ uint64_t rx_rss_pkt_num;
+ uint64_t rx_fnav_pkt_num;
+ uint64_t rx_ptp_pkt_num;
+ uint32_t rx_vec_align_drop;
+
+ uint32_t rxdid_1588_err;
+ uint32_t ip_csum_err;
+ uint32_t l4_csum_err;
+ uint32_t outer_ip_csum_err;
+ uint32_t outer_l4_csum_err;
+ uint32_t macsec_err;
+ uint32_t ipsec_err;
+
+ uint64_t ptype_pkts[SXE2_MAX_PTYPE_NUM];
+};
+
+struct sxe2_rxq_sw_stats {
+ RTE_ATOMIC(uint64_t) pkts;
+ RTE_ATOMIC(uint64_t) bytes;
+ RTE_ATOMIC(uint64_t) drop_pkts;
+ RTE_ATOMIC(uint64_t) drop_bytes;
+ RTE_ATOMIC(uint64_t) unicast_pkts;
+ RTE_ATOMIC(uint64_t) multicast_pkts;
+ RTE_ATOMIC(uint64_t) broadcast_pkts;
+};
+
+struct sxe2_rx_queue {
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile uint32_t *rdt_reg_addr;
+ struct rte_mempool *mb_pool;
+ struct rte_mbuf **buffer_ring;
+ struct sxe2_vsi *vsi;
+
+ uint64_t offloads;
+ uint16_t ring_depth;
+ uint16_t rx_free_thresh;
+ uint16_t processing_idx;
+ uint16_t hold_num;
+ uint16_t next_ret_pkt;
+ uint16_t batch_alloc_trigger;
+ uint16_t completed_pkts_num;
+ uint64_t update_time;
+ uint32_t desc_ts;
+ uint64_t ts_high;
+ uint32_t ts_low;
+ uint32_t ts_need_update;
+ uint8_t crc_len;
+ bool fnav_enable;
+
+ struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM];
+
+ struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2];
+ struct rte_mbuf *pkt_first_seg;
+ struct rte_mbuf *pkt_last_seg;
+ uint64_t mbuf_init_value;
+ uint16_t realloc_num;
+ uint16_t realloc_start;
+ struct rte_mbuf fake_mbuf;
+
+ const struct rte_memzone *mz;
+ struct sxe2_rxq_ops ops;
+ rte_iova_t base_addr;
+ uint16_t reg_idx;
+ uint32_t low_desc_waterline : 16;
+ uint32_t ldw_event_pending : 1;
+ struct sxe2_rxq_sw_stats sw_stats;
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint16_t idx_in_func;
+ uint16_t rx_buf_len;
+ uint16_t rx_hdr_len;
+ uint16_t max_pkt_len;
+ bool rx_deferred_start;
+ uint8_t drop_en;
+};
+
+struct sxe2_adapter;
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_queue_caps *q_caps);
+
+int32_t sxe2_queues_init(struct rte_eth_dev *dev);
+
+#endif /* __SXE2_QUEUE_H__ */
diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h
new file mode 100644
index 0000000000..63f56e4964
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_common.h
@@ -0,0 +1,540 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef _SXE2_TXRX_COMMON_H_
+#define _SXE2_TXRX_COMMON_H_
+#include <stdbool.h>
+
+#define SXE2_ALIGN_RING_DESC 32
+#define SXE2_MIN_RING_DESC 64
+#define SXE2_MAX_RING_DESC 4096
+
+#define SXE2_VECTOR_PATH 0
+#define SXE2_VECTOR_OFFLOAD_PATH 1
+#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2
+
+#define SXE2_MAX_PTYPE_NUM 1024
+#define SXE2_MIN_BUF_SIZE 1024
+
+#define SXE2_ALIGN 32
+#define SXE2_DESC_ADDR_ALIGN 128
+
+#define SXE2_MIN_TSO_MSS 88
+#define SXE2_MAX_TSO_MSS 9728
+
+#define SXE2_TX_MTU_SEG_MAX 15
+
+#define SXE2_TX_MIN_PKT_LEN 17
+#define SXE2_TX_MAX_BURST 32
+#define SXE2_TX_MAX_FREE_BUF 64
+#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024)
+
+#define DEFAULT_TX_RS_THRESH 32
+#define DEFAULT_TX_FREE_THRESH 32
+
+#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 RTE_BIT32(0)
+#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 RTE_BIT32(1)
+
+#define SXE2_TX_PKTS_BURST_BATCH_NUM 32
+
+union sxe2_tx_offload_info {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7;
+ uint64_t l3_len:9;
+ uint64_t l4_len:8;
+ uint64_t tso_segsz:16;
+ uint64_t outer_l2_len:8;
+ uint64_t outer_l3_len:16;
+ };
+};
+
+#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_MBUF_F_TX_UDP_SEG | \
+ RTE_MBUF_F_TX_QINQ | \
+ RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+ RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+ RTE_MBUF_F_TX_SEC_OFFLOAD | \
+ RTE_MBUF_F_TX_IEEE1588_TMST)
+
+#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_MBUF_F_TX_UDP_SEG | \
+ RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+ RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+struct sxe2_tx_context_desc {
+ uint32_t tunneling_params;
+ uint16_t l2tag2;
+ uint16_t ipsec_offset;
+ uint64_t type_cmd_tso_mss;
+};
+
+#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2
+#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9
+#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12
+#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23
+
+#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4
+#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11
+#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12
+#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13
+#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16
+#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30
+#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50
+#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50
+
+#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT)
+
+#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT)
+#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \
+ (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT)
+
+enum sxe2_tx_ctxt_desc_eipt_bits {
+ SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0,
+ SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1,
+ SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2,
+ SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3,
+};
+
+enum sxe2_tx_ctxt_desc_l4tunt_bits {
+ SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT,
+ SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT,
+};
+
+enum sxe2_tx_ctxt_desc_cmd_bits {
+ SXE2_TX_CTXT_DESC_CMD_TSO = 0x01,
+ SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02,
+ SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04,
+ SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30,
+ SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40
+};
+#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT)
+#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT)
+#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT)
+#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \
+ (((uint64_t)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT)
+#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \
+ (((uint64_t)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT)
+
+union sxe2_tx_data_desc {
+ struct {
+ uint64_t buf_addr;
+ uint64_t type_cmd_off_bsz_l2t;
+ } read;
+ struct {
+ uint64_t rsvd;
+ uint64_t dd;
+ } wb;
+};
+
+#define SXE2_TX_DATA_DESC_CMD_SHIFT 4
+#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 16
+#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34
+#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48
+
+#define SXE2_TX_DATA_DESC_CMD_MASK \
+ (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT)
+#define SXE2_TX_DATA_DESC_OFFSET_MASK \
+ (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT)
+#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \
+ (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT)
+#define SXE2_TX_DATA_DESC_L2TAG1_MASK \
+ (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)
+
+#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0)
+#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7)
+#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14)
+
+#define SXE2_TX_DESC_DTYPE_MASK 0xF
+#define SXE2_TX_DATA_DESC_MACLEN_MASK \
+ (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_IPLEN_MASK \
+ (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_L4LEN_MASK \
+ (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \
+ (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+enum sxe2_tx_desc_type {
+ SXE2_TX_DESC_DTYPE_DATA = 0x0,
+ SXE2_TX_DESC_DTYPE_CTXT = 0x1,
+ SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8,
+ SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF,
+};
+
+enum sxe2_tx_data_desc_cmd_bits {
+ SXE2_TX_DATA_DESC_CMD_EOP = 0x0001,
+ SXE2_TX_DATA_DESC_CMD_RS = 0x0002,
+ SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004,
+ SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008,
+ SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300,
+ SXE2_TX_DATA_DESC_CMD_RE = 0x0400
+};
+#define SXE2_TX_DATA_DESC_CMD_RS_MASK \
+ (((uint64_t)SXE2_TX_DATA_DESC_CMD_RS) << SXE2_TX_DATA_DESC_CMD_SHIFT)
+
+#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0x3FFFUL
+
+#define SXE2_TX_DESC_RING_ALIGN \
+ (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc))
+
+#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF
+
+#define SXE2_TX_FILL_PER_LOOP 4
+#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1)
+#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64)
+
+#define SXE2_RX_MAX_BURST 32
+#define SXE2_RING_SIZE_MIN 1024
+#define SXE2_RX_MAX_NSEG 2
+
+#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST
+#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST
+
+#define SXE2_RXQ_CTX_DBUFF_SHIFT 7
+
+#define SXE2_RX_NUM_PER_LOOP 8
+
+#define SXE2_RX_FLEX_DESC_PTYPE_S (16)
+#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL)
+
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_LDW_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+
+#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200
+
+#define SXE2_RX_VECTOR_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+
+#define SXE2_DEFAULT_RX_FREE_THRESH 32
+#define SXE2_DEFAULT_RX_PTHRESH 8
+#define SXE2_DEFAULT_RX_HTHRESH 8
+#define SXE2_DEFAULT_RX_WTHRESH 0
+
+#define SXE2_DEFAULT_TX_FREE_THRESH 32
+#define SXE2_DEFAULT_TX_PTHRESH 32
+#define SXE2_DEFAULT_TX_HTHRESH 0
+#define SXE2_DEFAULT_TX_WTHRESH 0
+#define SXE2_DEFAULT_TX_RSBIT_THRESH 32
+
+#define SXE2_RX_SEG_NUM 2
+
+#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+#define sxe2_rx_desc sxe2_rx_16b_desc
+#else
+#define sxe2_rx_desc sxe2_rx_32b_desc
+#endif
+
+union sxe2_rx_16b_desc {
+ struct {
+ uint64_t pkt_addr;
+ uint64_t hdr_addr;
+ } read;
+ struct {
+ uint8_t rxdid_src;
+ uint8_t mirror;
+ uint16_t l2tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+ } wb;
+};
+
+union sxe2_rx_32b_desc {
+ struct {
+ uint64_t pkt_addr;
+ uint64_t hdr_addr;
+ uint64_t rsvd1;
+ uint64_t rsvd2;
+ } read;
+ struct {
+ uint8_t rxdid_src;
+ uint8_t mirror;
+ uint16_t l2tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+
+ uint32_t status_lrocnt_fdpf_id;
+ uint16_t l2tag2_1st;
+ uint16_t l2tag2_2nd;
+
+ uint8_t acl_pf_id;
+ uint8_t sw_pf_id;
+ uint16_t flow_id;
+
+ uint32_t fd_filter_id;
+
+ } wb;
+ struct {
+ uint8_t rxdid_src_fd_eudpe;
+ uint8_t mirror;
+ uint16_t l2_tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+
+ uint32_t ext_status_ts_low;
+ uint16_t l2tag2_1st;
+ uint16_t l2tag2_2nd;
+
+ uint32_t ts_h;
+ uint32_t fd_filter_id;
+
+ } wb_ts;
+};
+
+enum sxe2_rx_lro_desc_max_num {
+ SXE2_RX_LRO_DESC_MAX_1 = 1,
+ SXE2_RX_LRO_DESC_MAX_4 = 4,
+ SXE2_RX_LRO_DESC_MAX_8 = 8,
+ SXE2_RX_LRO_DESC_MAX_16 = 16,
+ SXE2_RX_LRO_DESC_MAX_32 = 32,
+ SXE2_RX_LRO_DESC_MAX_48 = 48,
+ SXE2_RX_LRO_DESC_MAX_64 = 64,
+ SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64,
+};
+
+enum sxe2_rx_desc_rxdid {
+ SXE2_RX_DESC_RXDID_16B = 0,
+ SXE2_RX_DESC_RXDID_32B,
+ SXE2_RX_DESC_RXDID_1588,
+ SXE2_RX_DESC_RXDID_FD,
+};
+
+#define SXE2_RX_DESC_RXDID_SHIFT (0)
+#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT)
+#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT)
+
+#define SXE2_RX_DESC_PKT_SRC_SHIFT (3)
+#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT)
+#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT)
+
+#define SXE2_RX_DESC_FD_VLD_SHIFT (5)
+#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT)
+#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT)
+
+#define SXE2_RX_DESC_EUDPE_SHIFT (6)
+#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT)
+#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT)
+
+#define SXE2_RX_DESC_UDP_NET_SHIFT (7)
+#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT)
+#define SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT)
+
+#define SXE2_RX_DESC_MIRR_ID_SHIFT (0)
+#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT)
+#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \
+ (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT)
+
+#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6)
+#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT)
+#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \
+ (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT)
+
+#define SXE2_RX_DESC_PKT_LEN_SHIFT (32)
+#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT)
+#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT)
+
+#define SXE2_RX_DESC_HDR_LEN_SHIFT (46)
+#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT)
+#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT)
+
+#define SXE2_RX_DESC_SPH_SHIFT (57)
+#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT)
+#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT)
+
+#define SXE2_RX_DESC_PTYPE_SHIFT (16)
+#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT)
+#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL)
+#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT)
+
+#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32)
+#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL)
+
+#define SXE2_RX_DESC_LROCNT_SHIFT (0)
+#define SXE2_RX_DESC_LROCNT_MASK (0xF)
+
+enum sxe2_rx_desc_status_shift {
+ SXE2_RX_DESC_STATUS_DD_SHIFT = 0,
+ SXE2_RX_DESC_STATUS_EOP_SHIFT = 1,
+ SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2,
+
+ SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3,
+ SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4,
+ SXE2_RX_DESC_STATUS_SECP_SHIFT = 5,
+ SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6,
+ SXE2_RX_DESC_STATUS_SECE_SHIFT = 26,
+ SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27,
+ SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28,
+ SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30,
+ SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59,
+ SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60,
+ SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61,
+ SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62,
+ SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63,
+};
+
+#define SXE2_RX_DESC_STATUS_DD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT)
+#define SXE2_RX_DESC_STATUS_EOP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT)
+#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT)
+#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT)
+#define SXE2_RX_DESC_STATUS_CRCP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECTAG_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECE_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT)
+#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT)
+#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \
+ (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT)
+#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \
+ (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT)
+#define SXE2_RX_DESC_STATUS_LPBK_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT)
+#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT)
+#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT)
+#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT)
+#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT)
+
+enum sxe2_rx_desc_umbcast_val {
+ SXE2_RX_DESC_STATUS_UNICAST = 0,
+ SXE2_RX_DESC_STATUS_MUTICAST = 1,
+ SXE2_RX_DESC_STATUS_BOARDCAST = 2,
+};
+
+#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT)
+
+enum sxe2_rx_desc_error_shift {
+ SXE2_RX_DESC_ERROR_RXE_SHIFT = 7,
+ SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8,
+ SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9,
+
+ SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10,
+
+ SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11,
+
+ SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12,
+ SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13,
+ SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14,
+};
+
+#define SXE2_RX_DESC_ERROR_RXE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT)
+#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT)
+#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT)
+#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT)
+
+#define SXE2_RX_DESC_QW1_ERRORS_MASK \
+ (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \
+ SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \
+ SXE2_RX_DESC_ERROR_CSUM_EIP_MASK)
+
+enum sxe2_rx_desc_ext_status_shift {
+ SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4,
+ SXE2_RX_DESC_EXT_STATUS_RSVD = 5,
+ SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7,
+ SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13,
+};
+#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \
+ (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)
+#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \
+ (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT)
+#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \
+ (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT)
+
+enum sxe2_rx_desc_ipsec_shift {
+ SXE2_RX_DESC_IPSEC_PKT_S = 21,
+ SXE2_RX_DESC_IPSEC_ENGINE_S = 22,
+ SXE2_RX_DESC_IPSEC_MODE_S = 23,
+ SXE2_RX_DESC_IPSEC_STATUS_S = 24,
+
+ SXE2_RX_DESC_IPSEC_LAST
+};
+
+enum sxe2_rx_desc_ipsec_status {
+ SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0,
+ SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1,
+ SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2,
+ SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3,
+ SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4,
+ SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5,
+ SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6,
+ SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7,
+};
+
+#define SXE2_RX_DESC_IPSEC_PKT_MASK \
+ (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S)
+#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7)
+#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \
+ (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \
+ SXE2_RX_DESC_IPSEC_STATUS_MASK)
+
+#define SXE2_RX_ERR_BITS 0x3f
+
+#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4
+
+#define SXE2_RX_DESC_RING_ALIGN \
+ (SXE2_ALIGN / sizeof(union sxe2_rx_desc))
+
+#define SXE2_RX_RING_SIZE \
+ ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc))
+
+#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128)
+
+#endif /* _SXE2_TXRX_COMMON_H_ */
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h
new file mode 100644
index 0000000000..f45e33f9b7
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_poll.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef SXE2_TXRX_POLL_H
+#define SXE2_TXRX_POLL_H
+
+#include "sxe2_queue.h"
+
+uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* SXE2_TXRX_POLL_H */
diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c
new file mode 100644
index 0000000000..baaa20c02e
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_vsi.c
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_os.h>
+#include <rte_tailq.h>
+#include <rte_malloc.h>
+#include "sxe2_ethdev.h"
+#include "sxe2_vsi.h"
+#include "sxe2_common_log.h"
+#include "sxe2_cmd_chnl.h"
+
+void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_vsi_caps *vsi_caps)
+{
+ adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id;
+ adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id;
+ adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type;
+}
+
+static struct sxe2_vsi *
+sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, uint16_t vsi_id, uint16_t vsi_type)
+{
+ struct sxe2_vsi *vsi = NULL;
+ vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0);
+ if (vsi == NULL) {
+ PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct.");
+ goto l_end;
+ }
+ vsi->adapter = adapter;
+
+ vsi->vsi_id = vsi_id;
+ vsi->vsi_type = vsi_type;
+
+l_end:
+ return vsi;
+}
+
+static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, uint16_t num_queues, uint16_t base_idx)
+{
+ vsi->txqs.q_cnt = num_queues;
+ vsi->rxqs.q_cnt = num_queues;
+ vsi->txqs.base_idx_in_func = base_idx;
+ vsi->rxqs.base_idx_in_func = base_idx;
+}
+
+static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi)
+{
+ vsi->txqs.depth = vsi->txqs.depth ? : SXE2_DFLT_NUM_TX_DESC;
+ vsi->rxqs.depth = vsi->rxqs.depth ? : SXE2_DFLT_NUM_RX_DESC;
+
+ PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.",
+ vsi->vsi_id, vsi->txqs.q_cnt,
+ vsi->txqs.depth, vsi->rxqs.depth);
+}
+
+static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, uint16_t num_irqs, uint16_t base_idx)
+{
+ vsi->irqs.avail_cnt = num_irqs;
+ vsi->irqs.base_idx_in_pf = base_idx;
+}
+
+static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter,
+ uint16_t vsi_id,
+ uint16_t vsi_type)
+{
+ struct sxe2_vsi *vsi = NULL;
+ uint16_t num_queues = 0;
+ uint16_t queue_base_idx = 0;
+ uint16_t num_irqs = 0;
+ uint16_t irq_base_idx = 0;
+
+ vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type);
+ if (vsi == NULL)
+ goto l_end;
+
+ if (vsi_type == SXE2_VSI_T_DPDK_PF ||
+ vsi_type == SXE2_VSI_T_DPDK_VF) {
+ num_queues = adapter->q_ctxt.qp_cnt_assign;
+ queue_base_idx = adapter->q_ctxt.base_idx_in_pf;
+
+ num_irqs = adapter->irq_ctxt.max_cnt_hw;
+ irq_base_idx = adapter->irq_ctxt.base_idx_in_func;
+ } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) {
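+ /* the eswitch VSI needs only a single queue and interrupt */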
+ num_queues = 1;
+ num_irqs = 1;
+ }
+
+ sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx);
+
+ sxe2_vsi_queues_cfg(vsi);
+
+ sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx);
+
+l_end:
+ return vsi;
+}
+
+static void sxe2_vsi_node_free(struct sxe2_vsi *vsi)
+{
+ if (!vsi)
+ return;
+
+ rte_free(vsi);
+ vsi = NULL;
+}
+
+static int32_t sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+
+ if (vsi == NULL) {
+ PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy.");
+ goto l_end;
+ }
+
+ if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) {
+ ret = sxe2_drv_vsi_del(adapter, vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret);
+ if (ret == -EPERM)
+ goto l_free;
+ goto l_end;
+ }
+ }
+
+l_free:
+ rte_free(vsi);
+ vsi = NULL;
+
+ PMD_LOG_DEBUG(DRV, "vsi destroyed.");
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_main_vsi_create(struct sxe2_adapter *adapter)
+{
+ int32_t ret = 0;
+ uint16_t vsi_id = adapter->vsi_ctxt.dpdk_vsi_id;
+ uint16_t vsi_type = adapter->vsi_ctxt.vsi_type;
+ bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID);
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!is_reused)
+ vsi_type = SXE2_VSI_T_DPDK_PF;
+ else
+ PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id);
+
+ adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type);
+ if (adapter->vsi_ctxt.main_vsi == NULL) {
+ PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ if (!is_reused) {
+ ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret);
+ goto l_free_vsi;
+ }
+
+ adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id;
+ PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI");
+ }
+
+ goto l_end;
+
+l_free_vsi:
+ sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi);
+ adapter->vsi_ctxt.main_vsi = NULL;
+l_end:
+ return ret;
+}
+
+int32_t sxe2_vsi_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_main_vsi_create(adapter);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+void sxe2_vsi_uninit(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret;
+
+ if (adapter->vsi_ctxt.main_vsi == NULL) {
+ PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy.");
+ goto l_end;
+ }
+
+ ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret);
+ goto l_end;
+ }
+
+ adapter->vsi_ctxt.main_vsi = NULL;
+
+ PMD_LOG_DEBUG(DRV, "vsi destroyed.");
+
+l_end:
+ return;
+}
diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h
new file mode 100644
index 0000000000..e712f738f1
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_vsi.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_VSI_H__
+#define __SXE2_VSI_H__
+#include <rte_os.h>
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_MAX_BOND_MEMBER_CNT 4
+
+enum sxe2_drv_type {
+ SXE2_MAX_DRV_TYPE_DPDK = 0,
+ SXE2_MAX_DRV_TYPE_KERNEL,
+ SXE2_MAX_DRV_TYPE_CNT,
+};
+
+#define SXE2_MAX_USER_PRIORITY (8)
+
+#define SXE2_DFLT_NUM_RX_DESC 512
+#define SXE2_DFLT_NUM_TX_DESC 512
+
+#define SXE2_DFLT_Q_NUM_OTHER_VSI 1
+#define SXE2_INVALID_VSI_ID 0xFFFF
+
+struct sxe2_adapter;
+struct sxe2_drv_vsi_caps;
+struct rte_eth_dev;
+
+enum sxe2_vsi_type {
+ SXE2_VSI_T_PF = 0,
+ SXE2_VSI_T_VF,
+ SXE2_VSI_T_CTRL,
+ SXE2_VSI_T_LB,
+ SXE2_VSI_T_MACVLAN,
+ SXE2_VSI_T_ESW,
+ SXE2_VSI_T_RDMA,
+ SXE2_VSI_T_DPDK_PF,
+ SXE2_VSI_T_DPDK_VF,
+ SXE2_VSI_T_DPDK_ESW,
+ SXE2_VSI_T_NR,
+};
+
+struct sxe2_queue_info {
+ uint16_t base_idx_in_nic;
+ uint16_t base_idx_in_func;
+ uint16_t q_cnt;
+ uint16_t depth;
+ uint16_t rx_buf_len;
+ uint16_t max_frame_len;
+ struct sxe2_queue **queues;
+};
+
+struct sxe2_vsi_irqs {
+ uint16_t avail_cnt;
+ uint16_t used_cnt;
+ uint16_t base_idx_in_pf;
+};
+
+enum {
+ sxe2_VSI_DOWN = 0,
+ sxe2_VSI_CLOSE,
+ sxe2_VSI_DISABLE,
+ sxe2_VSI_MAX,
+};
+
+struct sxe2_stats {
+ uint64_t ipackets;
+
+ uint64_t opackets;
+
+ uint64_t ibytes;
+
+ uint64_t obytes;
+
+ uint64_t ierrors;
+
+ uint64_t imissed;
+
+ uint64_t rx_out_of_buffer;
+ uint64_t rx_qblock_drop;
+
+ uint64_t tx_frame_good;
+ uint64_t rx_frame_good;
+ uint64_t rx_crc_errors;
+ uint64_t tx_bytes_good;
+ uint64_t rx_bytes_good;
+ uint64_t tx_multicast_good;
+ uint64_t tx_broadcast_good;
+ uint64_t rx_multicast_good;
+ uint64_t rx_broadcast_good;
+ uint64_t rx_len_errors;
+ uint64_t rx_out_of_range_errors;
+ uint64_t rx_oversize_pkts_phy;
+ uint64_t rx_symbol_err;
+ uint64_t rx_pause_frame;
+ uint64_t tx_pause_frame;
+
+ uint64_t rx_discards_phy;
+ uint64_t rx_discards_ips_phy;
+
+ uint64_t tx_dropped_link_down;
+ uint64_t rx_undersize_good;
+ uint64_t rx_runt_error;
+ uint64_t tx_bytes_good_bad;
+ uint64_t tx_frame_good_bad;
+ uint64_t rx_jabbers;
+ uint64_t rx_size_64;
+ uint64_t rx_size_65_127;
+ uint64_t rx_size_128_255;
+ uint64_t rx_size_256_511;
+ uint64_t rx_size_512_1023;
+ uint64_t rx_size_1024_1522;
+ uint64_t rx_size_1523_max;
+ uint64_t rx_pcs_symbol_err_phy;
+ uint64_t rx_corrected_bits_phy;
+ uint64_t rx_err_lane_0_phy;
+ uint64_t rx_err_lane_1_phy;
+ uint64_t rx_err_lane_2_phy;
+ uint64_t rx_err_lane_3_phy;
+
+ uint64_t rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY];
+ uint64_t rx_illegal_bytes;
+ uint64_t rx_oversize_good;
+ uint64_t tx_unicast;
+ uint64_t tx_broadcast;
+ uint64_t tx_multicast;
+ uint64_t tx_vlan_packet_good;
+ uint64_t tx_size_64;
+ uint64_t tx_size_65_127;
+ uint64_t tx_size_128_255;
+ uint64_t tx_size_256_511;
+ uint64_t tx_size_512_1023;
+ uint64_t tx_size_1024_1522;
+ uint64_t tx_size_1523_max;
+ uint64_t tx_underflow_error;
+ uint64_t rx_byte_good_bad;
+ uint64_t rx_frame_good_bad;
+ uint64_t rx_unicast_good;
+ uint64_t rx_vlan_packets;
+
+ uint64_t prio_xoff_rx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_rx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_tx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xoff_tx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY];
+
+ uint64_t rx_vsi_unicast_packets;
+ uint64_t rx_vsi_bytes;
+ uint64_t tx_vsi_unicast_packets;
+ uint64_t tx_vsi_bytes;
+ uint64_t rx_vsi_multicast_packets;
+ uint64_t tx_vsi_multicast_packets;
+ uint64_t rx_vsi_broadcast_packets;
+ uint64_t tx_vsi_broadcast_packets;
+
+ uint64_t rx_sw_unicast_packets;
+ uint64_t rx_sw_broadcast_packets;
+ uint64_t rx_sw_multicast_packets;
+ uint64_t rx_sw_drop_packets;
+ uint64_t rx_sw_drop_bytes;
+};
+
+struct sxe2_vsi_stats {
+ struct sxe2_stats vsi_sw_stats;
+ struct sxe2_stats vsi_sw_stats_prev;
+ struct sxe2_stats vsi_hw_stats;
+ struct sxe2_stats stats;
+};
+
+struct sxe2_vsi {
+ TAILQ_ENTRY(sxe2_vsi) next;
+ struct sxe2_adapter *adapter;
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_vsi_irqs irqs;
+ struct sxe2_queue_info txqs;
+ struct sxe2_queue_info rxqs;
+ uint16_t budget;
+ struct sxe2_vsi_stats vsi_stats;
+};
+
+TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi);
+
+struct sxe2_vsi_context {
+ uint16_t func_id;
+ uint16_t dpdk_vsi_id;
+ uint16_t kernel_vsi_id;
+ uint16_t vsi_type;
+
+ uint16_t bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT];
+ uint16_t bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT];
+
+ struct sxe2_vsi *main_vsi;
+};
+
+void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_vsi_caps *vsi_caps);
+
+int32_t sxe2_vsi_init(struct rte_eth_dev *dev);
+
+void sxe2_vsi_uninit(struct rte_eth_dev *dev);
+
+#endif /* __SXE2_VSI_H__ */
--
2.47.3
* [PATCH v14 06/11] drivers: support PCI BAR mapping
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (4 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 05/11] drivers: add base driver probe skeleton liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 07/11] common/sxe2: add ioctl interface for DMA map and unmap liujie5
` (4 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement PCI BAR (Base Address Register) mapping and unmapping
logic to enable MMIO (Memory Mapped I/O) access to hardware
registers.
During the probing phase the driver mmap()s page-aligned windows of
BAR 0 (Tx/Rx doorbells and interrupt registers) and BAR 4 (MSI-X
table) through the kernel command device. These mappings serve all
subsequent register-level operations and are released again in the
device close path.
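
Because mmap() offsets must be page aligned, the mapping helper maps
whole pages and remembers the intra-page offset of the first register.
A minimal sketch of that logic (names simplified, error handling
omitted; bar_off() stands in for SXE2_COM_PCI_OFFSET_GEN()):

    size_t pg = rte_mem_page_size();
    off_t aligned_off = RTE_ALIGN_FLOOR(reg_offset, pg);
    size_t inner = reg_offset - aligned_off;         /* intra-page offset */
    size_t map_len = RTE_ALIGN(inner + reg_len, pg); /* whole pages */
    uint8_t *base;

    base = mmap(NULL, map_len, PROT_READ | PROT_WRITE, MAP_SHARED,
                cmd_fd, bar_off(bar_idx, aligned_off));
    /* the first register sits 'inner' bytes into the mapping */
    volatile uint32_t *reg = (volatile uint32_t *)(base + inner);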
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 8 +-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 38 ++-
drivers/net/sxe2/sxe2_ethdev.c | 326 +++++++++++++++++++++++++-
drivers/net/sxe2/sxe2_ethdev.h | 18 ++
4 files changed, 381 insertions(+), 9 deletions(-)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index 5cf43dd3b7..c309a890ad 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -185,7 +185,7 @@ static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
ret = sxe2_drv_dev_handshke(cdev);
if (ret != 0) {
- PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret);
+ PMD_LOG_ERR(COM, "handshake failed, ret=%d", ret);
goto l_close_dev;
}
@@ -605,18 +605,18 @@ static void sxe2_common_pci_init(void)
return;
}
-static bool sxe2_commoin_inited;
+static bool sxe2_common_inited;
RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
void
sxe2_common_init(void)
{
- if (sxe2_commoin_inited)
+ if (sxe2_common_inited)
goto l_end;
pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
sxe2_common_pci_init();
- sxe2_commoin_inited = true;
+ sxe2_common_inited = true;
l_end:
return;
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 11e24d04d9..48acc41509 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -133,7 +133,7 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
goto l_end;
}
- PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd);
+ PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd);
memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
cmd_params.dpdk_ver = SXE2_COM_VER;
@@ -142,7 +142,7 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
(void)pthread_mutex_lock(&cdev->config.lock);
ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
if (ret < 0) {
- PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
+ PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s",
cmd_fd, ret, strerror(errno));
ret = -errno;
(void)pthread_mutex_unlock(&cdev->config.lock);
@@ -159,6 +159,40 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
return ret;
}
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap)
+void *
+sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, uint8_t bar_idx, uint64_t len, uint64_t offset)
+{
+ int32_t cmd_fd = 0;
+ void *virt = NULL;
+
+ if (cdev->config.kernel_reset) {
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_err;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_err;
+ }
+
+ PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"",
+ bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+
+ virt = mmap(NULL, len, PROT_READ | PROT_WRITE,
+ MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+ if (virt == MAP_FAILED) {
+ PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s",
+ cmd_fd, len, offset, strerror(errno));
+ goto l_err;
+ }
+
+ return virt;
+l_err:
+ return NULL;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
int32_t
sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index f0bdda38a7..204add9c98 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -54,6 +54,27 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = {
{ .vendor_id = 0, },
};
+static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = {
+ [SXE2_PCI_MAP_RES_INVALID] = {.addr_base = 0,
+ .bar_idx = 0,
+ .reg_width = 0},
+ [SXE2_PCI_MAP_RES_DOORBELL_TX] = {.addr_base = SXE2_TXQ_LEGACY_DBLL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL] = {.addr_base = SXE2_RXQ_TAIL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_DYN] = {.addr_base = SXE2_VF_DYN_CTL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_ITR] = {.addr_base = SXE2_VF_INT_ITR(0, 0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_MSIX] = {.addr_base = SXE2_BAR4_MSIX_CTL(0),
+ .bar_idx = 4,
+ .reg_width = 10},
+};
+
static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -151,6 +172,7 @@ static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
(void)sxe2_dev_stop(dev);
sxe2_vsi_uninit(dev);
+ sxe2_dev_pci_map_uinit(dev);
return 0;
}
@@ -287,6 +309,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_infos_get = sxe2_dev_infos_get,
};
+struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type)
+{
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ uint8_t bar_idx = SXE2_PCI_MAP_BAR_INVALID;
+ uint8_t i;
+
+ bar_idx = map_ctxt->addr_info[res_type].bar_idx;
+ if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type);
+ goto l_end;
+ }
+
+ for (i = 0; i < map_ctxt->bar_cnt; i++) {
+ if (bar_idx == map_ctxt->bar_info[i].bar_idx) {
+ bar_info = &map_ctxt->bar_info[i];
+ break;
+ }
+ }
+
+l_end:
+ return bar_info;
+}
+
static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
struct sxe2_drv_dev_caps_resp *dev_caps)
{
@@ -354,6 +401,69 @@ static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
return ret;
}
+int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset)
+{
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ void *map_addr = NULL;
+ int32_t ret = 0;
+ size_t page_size = 0;
+ size_t aligned_len = 0;
+ size_t page_inner_offset = 0;
+ off_t aligned_offset = 0;
+ uint8_t i = 0;
+
+ if (org_len == 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0");
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (!bar_info) {
+ PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type);
+ ret = -EFAULT;
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ page_size = rte_mem_page_size();
+
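+ /*
+ * mmap() offsets must be page aligned: map the enclosing whole
+ * pages and record where the registers start inside the mapping.
+ */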
+ aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size);
+ page_inner_offset = org_offset - aligned_offset;
+ aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size);
+
+ map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx,
+ aligned_len, aligned_offset);
+ if (!map_addr) {
+ PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64
+ ", offset=%" PRIu64 ", page_size=%zu",
+ res_type, org_len, org_offset, page_size);
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID)
+ continue;
+ seg_info[i].type = res_type;
+ seg_info[i].addr = map_addr;
+ seg_info[i].page_inner_offset = page_inner_offset;
+ seg_info[i].len = aligned_len;
+ break;
+ }
+ if (i == bar_info->map_cnt) {
+ PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type);
+ ret = -ENOMEM;
+ sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
@@ -368,6 +478,55 @@ static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
return ret;
}
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
+ uint32_t item_cnt, uint32_t item_base)
+{
+ struct sxe2_pci_map_addr_info *addr_info = NULL;
+ int32_t ret = 0;
+
+ addr_info = &adapter->map_ctxt.addr_info[res_type];
+ if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type);
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width,
+ addr_info->addr_base + item_base * addr_info->reg_width);
+ if (ret != 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type);
+ goto l_end;
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type)
+{
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ uint32_t i = 0;
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (bar_info == NULL) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type);
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ if (res_type == seg_info[i].type) {
+ (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr,
+ seg_info[i].len);
+ memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info));
+ break;
+ }
+ }
+
+l_end:
+ return;
+}
+
static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter =
@@ -408,6 +567,157 @@ static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
return ret;
}
+int32_t sxe2_dev_pci_map_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ uint16_t txq_cnt = adapter->q_ctxt.qp_cnt_assign;
+ uint16_t txq_base = adapter->q_ctxt.base_idx_in_pf;
+ uint16_t rxq_cnt = adapter->q_ctxt.qp_cnt_assign;
+ uint16_t irq_cnt = adapter->irq_ctxt.max_cnt_hw;
+ uint16_t irq_base = adapter->irq_ctxt.base_idx_in_func;
+ uint16_t rxq_base = adapter->q_ctxt.base_idx_in_pf;
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ adapter->dev_info.dev_data = dev->data;
+
+ if (!pci_dev->mem_resource[0].phys_addr) {
+ PMD_LOG_ERR(INIT, "Physical address not scanned");
+ ret = -ENXIO;
+ goto l_end;
+ }
+
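+ /* mapped resources live in BAR 0 (doorbell/irq registers) and BAR 4 (MSI-X) */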
+ map_ctxt->bar_cnt = 2;
+
+ bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0);
+ if (!bar_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc bar_info");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+ bar_info[0].bar_idx = 0;
+ bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT;
+ seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0);
+ if (!seg_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc seg_info");
+ ret = -ENOMEM;
+ goto l_free_bar;
+ }
+
+ bar_info[0].seg_info = seg_info;
+
+ bar_info[1].bar_idx = 4;
+ bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT;
+ seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0);
+ if (!seg_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc seg_info");
+ ret = -ENOMEM;
+ goto l_free_seg0;
+ }
+
+ bar_info[1].seg_info = seg_info;
+ map_ctxt->bar_info = bar_info;
+
+ map_ctxt->addr_info = sxe2_net_map_addr_info_pf;
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX,
+ txq_cnt, txq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret);
+ goto l_free_seg1;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ rxq_cnt, rxq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret);
+ goto l_free_txq;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret);
+ goto l_free_rxq_tail;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret);
+ goto l_free_irq_dyn;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret);
+ goto l_free_irq_itr;
+ }
+ goto l_end;
+
+l_free_irq_itr:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR);
+l_free_irq_dyn:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN);
+l_free_rxq_tail:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL);
+l_free_txq:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX);
+l_free_seg1:
+ if (bar_info[1].seg_info) {
+ rte_free(bar_info[1].seg_info);
+ bar_info[1].seg_info = NULL;
+ }
+l_free_seg0:
+ if (bar_info[0].seg_info) {
+ rte_free(bar_info[0].seg_info);
+ bar_info[0].seg_info = NULL;
+ }
+l_free_bar:
+ if (bar_info) {
+ rte_free(bar_info);
+ bar_info = NULL;
+ map_ctxt->bar_info = NULL;
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ uint8_t i = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX);
+
+ if (map_ctxt != NULL && map_ctxt->bar_info != NULL) {
+ for (i = 0; i < map_ctxt->bar_cnt; i++) {
+ bar_info = &map_ctxt->bar_info[i];
+ if (bar_info != NULL && bar_info->seg_info != NULL) {
+ rte_free(bar_info->seg_info);
+ bar_info->seg_info = NULL;
+ }
+ }
+ rte_free(map_ctxt->bar_info);
+ map_ctxt->bar_info = NULL;
+ }
+
+ adapter->dev_info.dev_data = NULL;
+}
+
static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
struct sxe2_dev_kvargs_info *kvargs __rte_unused)
{
@@ -426,6 +736,12 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
goto l_end;
}
+ ret = sxe2_dev_pci_map_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret);
+ goto l_end;
+ }
+
ret = sxe2_vsi_init(dev);
if (ret) {
PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret);
@@ -548,8 +864,10 @@ static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
memset(eth_da, 0, sizeof(*eth_da));
if (dev->devargs->cls_str) {
- ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
- if (ret != 0) {
+ ret = rte_eth_devargs_parse(dev->devargs->cls_str,
+ eth_da,
+ 1);
+ if (ret) {
PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
dev->devargs->cls_str);
return -rte_errno;
@@ -557,7 +875,9 @@ static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
}
if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
- ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+ ret = rte_eth_devargs_parse(dev->devargs->args,
+ eth_da,
+ 1);
if (ret) {
PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
dev->devargs->args);
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
index de6d2e9d6c..e599a40d4f 100644
--- a/drivers/net/sxe2/sxe2_ethdev.h
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -290,4 +290,22 @@ struct sxe2_adapter {
#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \
((struct sxe2_adapter *)(dev)->data->dev_private)
+#define SXE2_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type);
+
+int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset);
+
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
+ uint32_t item_cnt, uint32_t item_base);
+
+void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type);
+
+int32_t sxe2_dev_pci_map_init(struct rte_eth_dev *dev);
+
+void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev);
+
#endif /* __SXE2_ETHDEV_H__ */
--
2.47.3
* [PATCH v14 07/11] common/sxe2: add ioctl interface for DMA map and unmap
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (5 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 06/11] drivers: support PCI BAR mapping liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 08/11] net/sxe2: support queue setup and control liujie5
` (3 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement DMA mapping and unmapping functionality using ioctl
calls. This allows the driver to configure the hardware's IOMMU/DMA
tables, ensuring the device can safely access memory buffers
allocated by user space.
The mapping is established during device initialization or queue
setup and is revoked during device closure to prevent memory
leaks and ensure hardware security.
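
For illustration only (not part of this patch): a minimal sketch of how an
application request reaches the new callbacks. The wrapper names below are
hypothetical; only rte_dev_dma_map()/rte_dev_dma_unmap() are existing DPDK
API, and error handling is elided.

#include <rte_dev.h>

/* Hypothetical wrapper: EAL routes this through the PCI bus to the new
 * .dma_map callback, which issues ioctl(SXE2_COM_CMD_DMA_MAP). */
static int map_ext_buf(struct rte_device *dev, void *va, uint64_t iova,
		       size_t len)
{
	return rte_dev_dma_map(dev, va, iova, len);
}

/* Hypothetical wrapper: routed to .dma_unmap, which issues
 * ioctl(SXE2_COM_CMD_DMA_UNMAP); only the IOVA is consumed there. */
static int unmap_ext_buf(struct rte_device *dev, void *va, uint64_t iova,
			 size_t len)
{
	return rte_dev_dma_unmap(dev, va, iova, len);
}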
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 50 +++++++++-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++
3 files changed, 162 insertions(+), 1 deletion(-)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index c309a890ad..f394c832d7 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -442,7 +442,7 @@ static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
if (cdev == NULL) {
ret = -ENODEV;
- PMD_LOG_ERR(COM, "Fail to get remove device.");
+ PMD_LOG_ERR(COM, "Fail to get device when remove.");
goto l_end;
}
@@ -466,12 +466,60 @@ static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
return ret;
}
+static int32_t sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev,
+ void *addr, uint64_t iova, size_t len)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+ PMD_LOG_ERR(COM, "Fail to get device when dma map.");
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_dma_map(cdev, (uint64_t)(uintptr_t)addr, iova, len);
+ if (ret) {
+ PMD_LOG_ERR(COM, "Fail to map dma map, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev,
+ void *addr __rte_unused, uint64_t iova, size_t len __rte_unused)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+ PMD_LOG_ERR(COM, "Fail to get device when dma unmap.");
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_dma_unmap(cdev, iova);
+ if (ret) {
+ PMD_LOG_ERR(COM, "Fail to unmap dma map, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
static struct rte_pci_driver sxe2_common_pci_driver = {
.driver = {
.name = SXE2_COMMON_PCI_DRIVER_NAME,
},
.probe = sxe2_common_pci_probe,
.remove = sxe2_common_pci_remove,
+ .dma_map = sxe2_common_pci_dma_map,
+ .dma_unmap = sxe2_common_pci_dma_unmap,
};
static uint32_t sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 48acc41509..08df9373d7 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -219,3 +219,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
l_end:
return ret;
}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map)
+int32_t
+sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, uint64_t vaddr,
+ uint64_t iova, uint64_t size)
+{
+ struct sxe2_ioctl_iommu_dma_map cmd_params;
+ enum rte_iova_mode iova_mode;
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ iova_mode = rte_eal_iova_mode();
+ if (iova_mode == RTE_IOVA_PA) {
+ if (cdev->config.support_iommu) {
+ PMD_LOG_ERR(COM, "iommu not support pa mode");
+ ret = -EIO;
+ }
+ goto l_end;
+ } else if (iova_mode == RTE_IOVA_VA) {
+ if (!cdev->config.support_iommu) {
+ PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode.");
+ ret = -EIO;
+ goto l_end;
+ }
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map));
+ cmd_params.vaddr = vaddr;
+ cmd_params.iova = iova;
+ cmd_params.size = size;
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap)
+int32_t
+sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, uint64_t iova)
+{
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+ struct sxe2_ioctl_iommu_dma_unmap cmd_params;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ if (!cdev->config.support_iommu)
+ goto l_end;
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"",
+ cmd_fd, iova);
+
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap));
+ cmd_params.iova = iova;
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
index 710ca1a8d0..ed9ceb1f5a 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -46,6 +46,15 @@ __rte_internal
int32_t
sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len);
+__rte_internal
+int32_t
+sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, uint64_t vaddr,
+ uint64_t iova, uint64_t size);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, uint64_t iova);
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v14 08/11] net/sxe2: support queue setup and control
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (6 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 07/11] common/sxe2: add ioctl interface for DMA map and unmap liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 09/11] drivers: add data path for Rx and Tx liujie5
` (2 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Add support for Rx and Tx queue setup, release, and management.
Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup,
rx_queue_release, and tx_queue_release.
This includes (a usage sketch follows this list):
- Allocating memory for hardware ring descriptors.
- Initializing software ring structures and hardware head/tail pointers.
- Implementing proper resource cleanup logic to prevent memory leaks
during queue reconfiguration or device close.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/net/sxe2/meson.build | 2 +
drivers/net/sxe2/sxe2_ethdev.c | 79 +++--
drivers/net/sxe2/sxe2_ethdev.h | 15 +-
drivers/net/sxe2/sxe2_rx.c | 559 +++++++++++++++++++++++++++++++++
drivers/net/sxe2/sxe2_rx.h | 34 ++
drivers/net/sxe2/sxe2_tx.c | 420 +++++++++++++++++++++++++
drivers/net/sxe2/sxe2_tx.h | 32 ++
7 files changed, 1115 insertions(+), 26 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_rx.c
create mode 100644 drivers/net/sxe2/sxe2_rx.h
create mode 100644 drivers/net/sxe2/sxe2_tx.c
create mode 100644 drivers/net/sxe2/sxe2_tx.h
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 00c38b147c..3dfe54903a 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -18,6 +18,8 @@ sources += files(
'sxe2_cmd_chnl.c',
'sxe2_vsi.c',
'sxe2_queue.c',
+ 'sxe2_tx.c',
+ 'sxe2_rx.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 204add9c98..acbfcafd63 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -24,6 +24,8 @@
#include "sxe2_ethdev.h"
#include "sxe2_drv_cmd.h"
#include "sxe2_cmd_chnl.h"
+#include "sxe2_tx.h"
+#include "sxe2_rx.h"
#include "sxe2_common.h"
#include "sxe2_common_log.h"
#include "sxe2_host_regs.h"
@@ -86,14 +88,6 @@ static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
return ret;
}
-static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
-static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -112,16 +106,6 @@ static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
return ret;
}
-static int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
-static int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -307,10 +291,18 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_stop = sxe2_dev_stop,
.dev_close = sxe2_dev_close,
.dev_infos_get = sxe2_dev_infos_get,
+
+ .rx_queue_setup = sxe2_rx_queue_setup,
+ .tx_queue_setup = sxe2_tx_queue_setup,
+ .rx_queue_release = sxe2_rx_queue_release,
+ .tx_queue_release = sxe2_tx_queue_release,
+
+ .rxq_info_get = sxe2_rx_queue_info_get,
+ .txq_info_get = sxe2_tx_queue_info_get,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type)
+ enum sxe2_pci_map_resource res_type)
{
struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
struct sxe2_pci_map_bar_info *bar_info = NULL;
@@ -334,6 +326,45 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter
return bar_info;
}
+void *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type,
+ uint16_t idx_in_func)
+{
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ void *addr = NULL;
+ uint8_t reg_width = 0;
+ uint8_t i = 0;
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (bar_info == NULL) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]",
+ res_type);
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ reg_width = map_ctxt->addr_info[res_type].reg_width;
+ if (reg_width == 0) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d",
+ res_type);
+ goto l_end;
+ }
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ seg_info = &bar_info->seg_info[i];
+ if (res_type == seg_info->type) {
+ addr = (void *)((uintptr_t)seg_info->addr +
+ seg_info->page_inner_offset + reg_width * idx_in_func);
+ goto l_end;
+ }
+ }
+
+l_end:
+ return addr;
+}
+
static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
struct sxe2_drv_dev_caps_resp *dev_caps)
{
@@ -402,7 +433,9 @@ static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
}
int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset)
+ enum sxe2_pci_map_resource res_type,
+ uint64_t org_len,
+ uint64_t org_offset)
{
struct sxe2_pci_map_bar_info *bar_info = NULL;
struct sxe2_pci_map_segment_info *seg_info = NULL;
@@ -478,8 +511,10 @@ static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
return ret;
}
-int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
- uint32_t item_cnt, uint32_t item_base)
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter,
+ uint32_t res_type,
+ uint32_t item_cnt,
+ uint32_t item_base)
{
struct sxe2_pci_map_addr_info *addr_info = NULL;
int32_t ret = 0;
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
index e599a40d4f..cf40d81cd9 100644
--- a/drivers/net/sxe2/sxe2_ethdev.h
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -293,14 +293,21 @@ struct sxe2_adapter {
#define SXE2_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
+void *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type,
+ uint16_t idx_in_func);
+
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type);
+ enum sxe2_pci_map_resource res_type);
int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset);
+ enum sxe2_pci_map_resource res_type,
+ uint64_t org_len, uint64_t org_offset);
-int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
- uint32_t item_cnt, uint32_t item_base);
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter,
+ uint32_t res_type,
+ uint32_t item_cnt,
+ uint32_t item_base);
void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type);
diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c
new file mode 100644
index 0000000000..a04c1808a6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_rx.c
@@ -0,0 +1,559 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+
+#include "sxe2_ethdev.h"
+#include "sxe2_queue.h"
+#include "sxe2_rx.h"
+#include "sxe2_cmd_chnl.h"
+
+#include "sxe2_osal.h"
+#include "sxe2_common_log.h"
+
+static void *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, uint16_t queue_id)
+{
+ return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ queue_id);
+}
+
+static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq)
+{
+ rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id);
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0);
+}
+
+static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq)
+{
+ uint16_t i = 0;
+ uint16_t len = 0;
+ static const union sxe2_rx_desc zeroed_desc = {{0}};
+
+ len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM;
+ for (i = 0; i < len; ++i)
+ rxq->desc_ring[i] = zeroed_desc;
+
+ memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf));
+ for (i = rxq->ring_depth; i < len; i++)
+ rxq->buffer_ring[i] = &rxq->fake_mbuf;
+
+ rxq->hold_num = 0;
+ rxq->next_ret_pkt = 0;
+ rxq->processing_idx = 0;
+ rxq->completed_pkts_num = 0;
+ rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1;
+
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+
+ rxq->realloc_num = 0;
+ rxq->realloc_start = 0;
+}
+
+void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (rxq->buffer_ring != NULL) {
+ for (i = 0; i < rxq->ring_depth; i++) {
+ if (rxq->buffer_ring[i] != NULL) {
+ rte_pktmbuf_free(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ }
+
+ if (rxq->completed_pkts_num) {
+ for (i = 0; i < rxq->completed_pkts_num; ++i) {
+ if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) {
+ rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]);
+ rxq->completed_buf[rxq->next_ret_pkt + i] = NULL;
+ }
+ }
+ rxq->completed_pkts_num = 0;
+ }
+}
+
+const struct sxe2_rxq_ops sxe2_default_rxq_ops = {
+ .queue_reset = sxe2_rx_queue_reset,
+ .mbufs_release = sxe2_rx_queue_mbufs_release,
+};
+
+static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void)
+{
+ return sxe2_default_rxq_ops;
+}
+
+void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev,
+ uint16_t queue_id, struct rte_eth_rxq_info *qinfo)
+{
+ struct sxe2_rx_queue *rxq = NULL;
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u",
+ queue_id, dev->data->nb_rx_queues);
+ goto end;
+ }
+
+ rxq = dev->data->rx_queues[queue_id];
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id);
+ goto end;
+ }
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->nb_desc = rxq->ring_depth;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+
+end:
+ return;
+}
+
+int32_t __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_rx_queue *rxq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->rx_queue_state[rx_queue_id] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (rxq == NULL) {
+ ret = 0;
+ goto l_end;
+ }
+ ret = sxe2_drv_rxq_switch(adapter, rxq, false);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d",
+ rx_queue_id, ret);
+ if (ret == -EPERM)
+ goto l_free;
+ goto l_end;
+ }
+
+l_free:
+ rxq->ops.mbufs_release(rxq);
+ rxq->ops.queue_reset(rxq);
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+l_end:
+ return ret;
+}
+
+static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq)
+{
+ if (rxq != NULL) {
+ rxq->ops.mbufs_release(rxq);
+ if (rxq->buffer_ring != NULL) {
+ rte_free(rxq->buffer_ring);
+ rxq->buffer_ring = NULL;
+ }
+ rte_memzone_free(rxq->mz);
+ rte_free(rxq);
+ }
+}
+
+void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev,
+ uint16_t queue_idx)
+{
+ (void)sxe2_rx_queue_stop(dev, queue_idx);
+ sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+}
+
+void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_rxq;
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ if (data->rx_queues[nb_rxq] == NULL)
+ continue;
+ sxe2_rx_queue_release(dev, nb_rxq);
+ data->rx_queues[nb_rxq] = NULL;
+ }
+ data->nb_rx_queues = 0;
+}
+
+static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t ring_depth, uint32_t socket_id)
+{
+ struct sxe2_rx_queue *rxq;
+ const struct rte_memzone *tz;
+ uint16_t len;
+
+ if (dev->data->rx_queues[queue_idx] != NULL) {
+ sxe2_rx_queue_release(dev, queue_idx);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx);
+ goto l_end;
+ }
+
+ rxq->ring_depth = ring_depth;
+ len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM;
+
+ rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE, socket_id);
+
+ if (!rxq->buffer_ring) {
+ PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed");
+ rte_free(rxq);
+ rxq = NULL;
+ goto l_end;
+ }
+
+ tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx,
+ SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id);
+ if (tz == NULL) {
+ PMD_LOG_ERR(RX, "Rxq malloc desc mem failed");
+ rte_free(rxq->buffer_ring);
+ rxq->buffer_ring = NULL;
+ rte_free(rxq);
+ rxq = NULL;
+ goto l_end;
+ }
+
+ rxq->mz = tz;
+ memset(tz->addr, 0, SXE2_RX_RING_SIZE);
+ rxq->base_addr = tz->iova;
+ rxq->desc_ring = (union sxe2_rx_desc *)tz->addr;
+
+l_end:
+ return rxq;
+}
+
+int32_t __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ struct sxe2_rx_queue *rxq;
+ uint64_t offloads;
+ int32_t ret;
+ uint16_t rx_nseg;
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 ||
+ nb_desc > SXE2_MAX_RING_DESC ||
+ nb_desc < SXE2_MIN_RING_DESC) {
+ PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (mp != NULL)
+ rx_nseg = 1;
+ else
+ rx_nseg = rx_conf->rx_nseg;
+
+ offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+ if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u",
+ dev->data->port_id, queue_idx, rx_nseg);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) {
+ PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u",
+ dev->data->port_id, queue_idx, rx_nseg);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
+ PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.",
+ dev->data->port_id, queue_idx);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id);
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ dev->data->lro = 1;
+
+ if (rx_nseg > 1) {
+ for (i = 0; i < rx_nseg; i++) {
+ rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split,
+ sizeof(struct rte_eth_rxseg_split));
+ }
+ rxq->mb_pool = rxq->rx_seg[0].mp;
+ } else {
+ rxq->mb_pool = mp;
+ }
+
+ rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+ rxq->port_id = dev->data->port_id;
+ rxq->offloads = offloads;
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ rxq->queue_id = queue_idx;
+ rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx;
+ rxq->drop_en = rx_conf->rx_drop_en;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->vsi = vsi;
+ rxq->ops = sxe2_rx_default_ops_get();
+ rxq->ops.queue_reset(rxq);
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ ret = 0;
+l_end:
+ return ret;
+}
+
+struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp)
+{
+ return rte_mbuf_raw_alloc(mp);
+}
+
+static int32_t __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq)
+{
+ struct rte_mbuf **buf_ring = rxq->buffer_ring;
+ struct rte_mbuf *mbuf = NULL;
+ struct rte_mbuf *mbuf_pay;
+ volatile union sxe2_rx_desc *desc;
+ uint64_t dma_addr;
+ int32_t ret;
+ uint16_t i, j;
+
+ for (i = 0; i < rxq->ring_depth; i++) {
+ mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool);
+ if (mbuf == NULL) {
+ PMD_LOG_ERR(RX, "Rx queue is not available or setup");
+ ret = -ENOMEM;
+ goto l_err_free_mbuf;
+ }
+
+ buf_ring[i] = mbuf;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+ desc = &rxq->desc_ring[i];
+ if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ mbuf->next = NULL;
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr = dma_addr;
+ } else {
+ mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
+ if (unlikely(!mbuf_pay)) {
+ PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX");
+ ret = -ENOMEM;
+ goto l_err_free_mbuf;
+ }
+
+ mbuf_pay->next = NULL;
+ mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf_pay->nb_segs = 1;
+ mbuf_pay->port = rxq->port_id;
+ mbuf->next = mbuf_pay;
+
+ desc->read.hdr_addr = dma_addr;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay));
+ }
+
+#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+ desc->read.rsvd1 = 0;
+ desc->read.rsvd2 = 0;
+#endif
+ }
+
+ ret = 0;
+ goto l_end;
+
+l_err_free_mbuf:
+ for (j = 0; j <= i; j++) {
+ if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) {
+ rte_pktmbuf_free(buf_ring[j]->next);
+ buf_ring[j]->next = NULL;
+ }
+
+ if (buf_ring[j] != NULL) {
+ rte_pktmbuf_free(buf_ring[j]);
+ buf_ring[j] = NULL;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct sxe2_rx_queue *rxq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "Rx queue %u is not available or setup",
+ rx_queue_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (dev->data->rx_queue_state[rx_queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ ret = sxe2_rx_queue_mbufs_alloc(rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail",
+ rx_queue_id);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ sxe2_rx_head_tail_init(adapter, rxq);
+
+ ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d",
+ rx_queue_id, ret);
+
+ (void)sxe2_drv_rxq_switch(adapter, rxq, false);
+ rxq->ops.mbufs_release(rxq);
+ rxq->ops.queue_reset(rxq);
+ goto l_end;
+ }
+
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1);
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_rx_queue *rxq;
+ uint16_t nb_rxq;
+ uint16_t nb_started_rxq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (!rxq || rxq->rx_deferred_start)
+ continue;
+
+ ret = sxe2_rx_queue_start(dev, nb_rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq);
+ goto l_free_started_queue;
+ }
+
+ rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0,
+ rte_memory_order_relaxed);
+ }
+ ret = 0;
+ goto l_end;
+
+l_free_started_queue:
+ for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++)
+ (void)sxe2_rx_queue_stop(dev, nb_started_rxq);
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev;
+ struct sxe2_rx_queue *rxq = NULL;
+ int32_t ret;
+ uint16_t nb_rxq;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ ret = sxe2_rx_queue_stop(dev, nb_rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq);
+ continue;
+ }
+
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (rxq) {
+ sw_stats_prev->ipackets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->ierrors +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->ibytes +=
+ rte_atomic_load_explicit(&rxq->sw_stats.bytes,
+ rte_memory_order_relaxed);
+
+ sw_stats_prev->rx_sw_unicast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_broadcast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_multicast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_drop_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_drop_bytes +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes,
+ rte_memory_order_relaxed);
+ }
+ }
+}
diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h
new file mode 100644
index 0000000000..35fb1e3e44
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_rx.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_RX_H__
+#define __SXE2_RX_H__
+
+#include "sxe2_queue.h"
+
+int32_t __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+
+int32_t __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq);
+
+void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+
+void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+int32_t __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev);
+
+struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp);
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c
new file mode 100644
index 0000000000..a05beb8c7a
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_tx.c
@@ -0,0 +1,420 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include "sxe2_tx.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+#include "sxe2_cmd_chnl.h"
+
+static void *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, uint16_t queue_id)
+{
+ return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX,
+ queue_id);
+}
+
+static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq)
+{
+ txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id);
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0);
+}
+
+void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq)
+{
+ uint16_t prev, i;
+ volatile union sxe2_tx_data_desc *txd;
+ static const union sxe2_tx_data_desc zeroed_desc = {{0}};
+ struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring;
+
+ for (i = 0; i < txq->ring_depth; i++)
+ txq->desc_ring[i] = zeroed_desc;
+
+ prev = txq->ring_depth - 1;
+ for (i = 0; i < txq->ring_depth; i++) {
+ txd = &txq->desc_ring[i];
+ if (txd == NULL)
+ continue;
+
+ txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE);
+ tx_buffer[i].mbuf = NULL;
+ tx_buffer[i].last_id = i;
+ tx_buffer[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->desc_used_num = 0;
+ txq->desc_free_num = txq->ring_depth - 1;
+ txq->next_use = 0;
+ txq->next_clean = txq->ring_depth - 1;
+ txq->next_dd = txq->rs_thresh - 1;
+ txq->next_rs = txq->rs_thresh - 1;
+}
+
+void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq)
+{
+ uint32_t i;
+
+ if (txq != NULL && txq->buffer_ring != NULL) {
+ for (i = 0; i < txq->ring_depth; i++) {
+ if (txq->buffer_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf);
+ txq->buffer_ring[i].mbuf = NULL;
+ }
+ }
+ }
+}
+
+static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq)
+{
+ if (txq != NULL && txq->buffer_ring != NULL)
+ rte_free(txq->buffer_ring);
+}
+
+const struct sxe2_txq_ops sxe2_default_txq_ops = {
+ .queue_reset = sxe2_tx_queue_reset,
+ .mbufs_release = sxe2_tx_queue_mbufs_release,
+ .buffer_ring_free = sxe2_tx_buffer_ring_free,
+};
+
+static struct sxe2_txq_ops sxe2_tx_default_ops_get(void)
+{
+ return sxe2_default_txq_ops;
+}
+
+static int32_t sxe2_txq_arg_validate(struct rte_eth_dev *dev, uint16_t ring_depth,
+ uint16_t *rs_thresh, uint16_t *free_thresh, const struct rte_eth_txconf *tx_conf)
+{
+ int32_t ret = 0;
+
+ if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 ||
+ ring_depth > SXE2_MAX_RING_DESC ||
+ ring_depth < SXE2_MIN_RING_DESC) {
+ PMD_LOG_ERR(TX, "number:%u of receive descriptors is invalid", ring_depth);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ *free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+ *rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+ tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+
+ if (*rs_thresh >= (ring_depth - 2)) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number "
+ "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)",
+ *rs_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (*free_thresh >= (ring_depth - 3)) {
+ PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number "
+ "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)",
+ *free_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (*rs_thresh > *free_thresh) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to "
+ "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)",
+ *free_thresh, *rs_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((ring_depth % *rs_thresh) != 0) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the "
+ "number of tx descriptors. (tx_rs_thresh:%u port:%d ring_depth:%u)",
+ *rs_thresh, dev->data->port_id, ring_depth);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct sxe2_tx_queue *txq = NULL;
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id);
+ goto end;
+ }
+
+ qinfo->nb_desc = txq->ring_depth;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_free_thresh = txq->free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+
+end:
+ return;
+}
+
+int32_t __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_tx_queue *txq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->tx_queue_state[queue_id] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ ret = 0;
+ goto l_end;
+ }
+
+ ret = sxe2_drv_txq_switch(adapter, txq, false);
+ if (ret) {
+ PMD_LOG_ERR(TX, "Failed to switch tx queue %u off",
+ queue_id);
+ goto l_end;
+ }
+
+ txq->ops.mbufs_release(txq);
+ txq->ops.queue_reset(txq);
+ dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+static void __rte_cold sxe2_tx_queue_free(struct sxe2_tx_queue *txq)
+{
+ if (txq != NULL) {
+ txq->ops.mbufs_release(txq);
+ txq->ops.buffer_ring_free(txq);
+
+ rte_memzone_free(txq->mz);
+ rte_free(txq);
+ }
+}
+
+void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+ (void)sxe2_tx_queue_stop(dev, queue_idx);
+ sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+}
+
+void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_txq;
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ if (data->tx_queues[nb_txq] == NULL)
+ continue;
+
+ sxe2_tx_queue_release(dev, nb_txq);
+ data->tx_queues[nb_txq] = NULL;
+ }
+ data->nb_tx_queues = 0;
+}
+
+static struct sxe2_tx_queue
+*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t ring_depth, uint32_t socket_id)
+{
+ struct sxe2_tx_queue *txq;
+ const struct rte_memzone *tz;
+
+ if (dev->data->tx_queues[queue_idx]) {
+ sxe2_tx_queue_release(dev, queue_idx);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx);
+ goto l_end;
+ }
+
+ tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx,
+ sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC,
+ SXE2_DESC_ADDR_ALIGN, socket_id);
+ if (tz == NULL) {
+ PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx);
+ rte_free(txq);
+ txq = NULL;
+ goto l_end;
+ }
+
+ txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring",
+ sizeof(struct sxe2_tx_buffer) * ring_depth,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq->buffer_ring == NULL) {
+ PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx);
+ rte_memzone_free(tz);
+ rte_free(txq);
+ txq = NULL;
+ goto l_end;
+ }
+
+ txq->mz = tz;
+ txq->base_addr = tz->iova;
+ txq->desc_ring = (volatile union sxe2_tx_data_desc *)tz->addr;
+
+l_end:
+ return txq;
+}
+
+int32_t __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int32_t ret = 0;
+ uint16_t tx_rs_thresh;
+ uint16_t tx_free_thresh;
+ struct sxe2_tx_queue *txq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ uint64_t offloads;
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf);
+ if (ret) {
+ PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx);
+ goto end;
+ }
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id);
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx);
+ ret = -ENOMEM;
+ goto end;
+ }
+
+ txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1;
+ txq->ring_depth = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->pthresh = tx_conf->tx_thresh.pthresh;
+ txq->hthresh = tx_conf->tx_thresh.hthresh;
+ txq->wthresh = tx_conf->tx_thresh.wthresh;
+ txq->queue_id = queue_idx;
+ txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->vsi = vsi;
+ txq->ops = sxe2_tx_default_ops_get();
+ txq->ops.queue_reset(txq);
+
+ dev->data->tx_queues[queue_idx] = txq;
+ ret = 0;
+
+end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ int32_t ret = 0;
+ struct sxe2_tx_queue *txq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1);
+ if (ret) {
+ PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id);
+
+ (void)sxe2_drv_txq_switch(adapter, txq, false);
+ txq->ops.mbufs_release(txq);
+ txq->ops.queue_reset(txq);
+ goto l_end;
+ }
+
+ sxe2_tx_tail_init(adapter, txq);
+
+ dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_tx_queue *txq;
+ uint16_t nb_txq;
+ uint16_t nb_started_txq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ txq = dev->data->tx_queues[nb_txq];
+ if (!txq || txq->tx_deferred_start)
+ continue;
+
+ ret = sxe2_tx_queue_start(dev, nb_txq);
+ if (ret) {
+ PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq);
+ goto l_free_started_queue;
+ }
+ }
+ ret = 0;
+ goto l_end;
+
+l_free_started_queue:
+ for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++)
+ (void)sxe2_tx_queue_stop(dev, nb_started_txq);
+
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_txq;
+ int32_t ret;
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ ret = sxe2_tx_queue_stop(dev, nb_txq);
+ if (ret) {
+ PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq);
+ continue;
+ }
+ }
+}
diff --git a/drivers/net/sxe2/sxe2_tx.h b/drivers/net/sxe2/sxe2_tx.h
new file mode 100644
index 0000000000..cec1179b5a
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_tx.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TX_H__
+#define __SXE2_TX_H__
+#include "sxe2_queue.h"
+
+void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq);
+
+int32_t __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
+
+void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq);
+
+int32_t __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
+
+int32_t __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_txconf *tx_conf);
+
+void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+
+void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev);
+
+#endif
--
2.47.3
* [PATCH v14 09/11] drivers: add data path for Rx and Tx
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (7 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 08/11] net/sxe2: support queue setup and control liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 10/11] net/sxe2: add vectorized " liujie5
2026-05-16 2:55 ` [PATCH v14 11/11] net/sxe2: implement Tx done cleanup liujie5
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement receive and transmit burst functions for the sxe2 PMD.
Add sxe2_rx_pkts_scattered (plus a buffer-split variant) and
sxe2_tx_pkts as the primary data path interfaces; a minimal
forwarding sketch follows the list below.
The implementation includes:
- Efficient descriptor fetching and mbuf allocation for Rx.
- Descriptor setup and checksum offload handling for Tx.
- Buffer recycling and hardware tail pointer updates.
- Performance-oriented loop unrolling and prefetching where applicable.
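
For illustration only (not part of this patch), a minimal run-to-completion
forwarding sketch as referenced above; the port id, queue 0 and the burst
size of 32 are assumptions.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void fwd_loop(uint16_t port_id)
{
	struct rte_mbuf *bufs[32];
	uint16_t nb_rx, nb_tx;

	for (;;) {
		/* Resolves to sxe2_rx_pkts_scattered() (or the buffer-split
		 * variant when RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set). */
		nb_rx = rte_eth_rx_burst(port_id, 0, bufs, 32);
		if (nb_rx == 0)
			continue;

		/* Resolves to sxe2_tx_pkts(); sxe2_tx_pkts_prepare() runs
		 * beforehand only if the app calls rte_eth_tx_prepare(). */
		nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

		/* Drop what the Tx ring could not accept. */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(bufs[nb_tx++]);
	}
}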
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_ioctl_chnl.c | 8 +-
drivers/common/sxe2/sxe2_osal.h | 3 +-
drivers/net/sxe2/meson.build | 2 +
drivers/net/sxe2/sxe2_ethdev.c | 13 +-
drivers/net/sxe2/sxe2_txrx.c | 246 +++++++
drivers/net/sxe2/sxe2_txrx.h | 21 +
drivers/net/sxe2/sxe2_txrx_poll.c | 919 ++++++++++++++++++++++++++
7 files changed, 1205 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_txrx.c
create mode 100644 drivers/net/sxe2/sxe2_txrx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 08df9373d7..0522dc68b5 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -177,13 +177,13 @@ void
goto l_err;
}
- PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"",
+ PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"",
bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
virt = mmap(NULL, len, PROT_READ | PROT_WRITE,
MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
if (virt == MAP_FAILED) {
- PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s",
+ PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s",
cmd_fd, len, offset, strerror(errno));
goto l_err;
}
@@ -205,12 +205,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
goto l_end;
}
- PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx",
+ PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"",
virt, len);
ret = munmap(virt, len);
if (ret < 0) {
- PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s",
+ PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s",
virt, len, strerror(errno));
ret = -errno;
goto l_end;
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
index c3a395750a..073f3edbdb 100644
--- a/drivers/common/sxe2/sxe2_osal.h
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -22,8 +22,7 @@
#endif
#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
-
-#define BITS_PER_BYTE 8
+#define BITS_PER_BYTE 8
#define IS_UNICAST_ETHER_ADDR(addr) \
((bool)((((uint8_t *)(addr))[0] % ((uint8_t)0x2)) == 0))
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 3dfe54903a..5645e3ad61 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -20,6 +20,8 @@ sources += files(
'sxe2_queue.c',
'sxe2_tx.c',
'sxe2_rx.c',
+ 'sxe2_txrx_poll.c',
+ 'sxe2_txrx.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index acbfcafd63..0066eb67b1 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -26,6 +26,7 @@
#include "sxe2_cmd_chnl.h"
#include "sxe2_tx.h"
#include "sxe2_rx.h"
+#include "sxe2_txrx.h"
#include "sxe2_common.h"
#include "sxe2_common_log.h"
#include "sxe2_host_regs.h"
@@ -137,6 +138,9 @@ static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
goto l_end;
}
+ sxe2_rx_mode_func_set(dev);
+ sxe2_tx_mode_func_set(dev);
+
ret = sxe2_queues_start(dev);
if (ret) {
PMD_LOG_ERR(INIT, "enable queues failed");
@@ -760,10 +764,17 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
+ sxe2_set_common_function(dev);
+
dev->dev_ops = &sxe2_eth_dev_ops;
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ sxe2_rx_mode_func_set(dev);
+ sxe2_tx_mode_func_set(dev);
goto l_end;
+ }
ret = sxe2_hw_init(dev);
if (ret) {
diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c
new file mode 100644
index 0000000000..2531b49a52
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx.c
@@ -0,0 +1,246 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include <unistd.h>
+
+#include "sxe2_txrx.h"
+#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_poll.h"
+#include "sxe2_ethdev.h"
+
+#include "sxe2_common_log.h"
+#include "sxe2_osal.h"
+#include "sxe2_cmd_chnl.h"
+#if defined(RTE_ARCH_ARM64)
+#include <rte_cpuflags.h>
+#endif
+
+static int32_t sxe2_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+ int32_t ret;
+ uint16_t desc_idx;
+
+ if (unlikely(offset >= txq->ring_depth)) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ desc_idx = txq->next_use + offset;
+ desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh);
+ if (desc_idx >= txq->ring_depth) {
+ desc_idx -= txq->ring_depth;
+ if (desc_idx >= txq->ring_depth)
+ desc_idx -= txq->ring_depth;
+ }
+
+ if (desc_idx == 0)
+ desc_idx = txq->rs_thresh - 1;
+ else
+ desc_idx -= 1;
+
+ if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) ==
+ (txq->desc_ring[desc_idx].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK)))
+ ret = RTE_ETH_TX_DESC_DONE;
+ else
+ ret = RTE_ETH_TX_DESC_FULL;
+
+l_end:
+ return ret;
+}
+
+static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
+{
+ struct rte_mbuf *m_seg = mbuf;
+
+ while (m_seg != NULL) {
+ if (m_seg->data_len == 0)
+ return -EINVAL;
+ m_seg = m_seg->next;
+ }
+
+ return 0;
+}
+
+uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = tx_queue;
+ struct rte_mbuf *mbuf;
+ uint64_t ol_flags = 0;
+ int32_t ret = 0;
+ int32_t i = 0;
+
+ for (i = 0; i < nb_pkts; i++) {
+ mbuf = tx_pkts[i];
+ if (!mbuf)
+ continue;
+ ol_flags = mbuf->ol_flags;
+ if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
+ if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX ||
+ mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) {
+				rte_errno = EINVAL;
+ goto l_end;
+ }
+ } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) ||
+ (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) ||
+ (mbuf->nb_segs > txq->ring_depth) ||
+ (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) {
+			rte_errno = EINVAL;
+ goto l_end;
+ }
+
+ if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) {
+			rte_errno = EINVAL;
+ goto l_end;
+ }
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = rte_validate_tx_offload(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+
+ ret = sxe2_tx_mbuf_empty_check(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return i;
+}
+
+void sxe2_tx_mode_func_set(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ uint32_t tx_mode_flags = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts;
+ adapter->q_ctxt.tx_mode_flags = tx_mode_flags;
+ PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.",
+ tx_mode_flags, dev->data->port_id);
+}
+
+static int32_t sxe2_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc;
+ int32_t ret;
+
+ if (unlikely(offset >= rxq->ring_depth)) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (offset >= rxq->ring_depth - rxq->hold_num) {
+ ret = RTE_ETH_RX_DESC_UNAVAIL;
+ goto l_end;
+ }
+
+ if (rxq->processing_idx + offset >= rxq->ring_depth)
+ desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth];
+ else
+ desc = &rxq->desc_ring[rxq->processing_idx + offset];
+
+ if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK)
+ ret = RTE_ETH_RX_DESC_DONE;
+ else
+ ret = RTE_ETH_RX_DESC_AVAIL;
+
+l_end:
+ PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u",
+ offset, ret, rxq->queue_id, rxq->port_id);
+ return ret;
+}
+
+static int32_t sxe2_rx_queue_count(void *rx_queue)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc;
+ uint16_t done_num = 0;
+
+ desc = &rxq->desc_ring[rxq->processing_idx];
+ while ((done_num < rxq->ring_depth) &&
+ (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_DD_MASK)) {
+ done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
+ if (rxq->processing_idx + done_num >= rxq->ring_depth)
+ desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth];
+ else
+ desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
+ }
+
+ PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u",
+ done_num, rxq->queue_id, rxq->port_id);
+
+ return done_num;
+}
+
+static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
+{
+ struct sxe2_rx_queue *rxq;
+ bool en = false;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL)
+ continue;
+
+ if (0 != (rxq->offloads & offload)) {
+ en = true;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return en;
+}
+
+void sxe2_rx_mode_func_set(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ uint32_t rx_mode_flags = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split;
+ else
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered;
+
+ PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.",
+ rx_mode_flags, dev->data->port_id);
+ adapter->q_ctxt.rx_mode_flags = rx_mode_flags;
+}
+
+void sxe2_set_common_function(struct rte_eth_dev *dev)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ dev->rx_queue_count = sxe2_rx_queue_count;
+	dev->rx_descriptor_status = sxe2_rx_descriptor_status;
+
+	dev->tx_descriptor_status = sxe2_tx_descriptor_status;
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+}
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
new file mode 100644
index 0000000000..f81c11ac56
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef SXE2_TXRX_H
+#define SXE2_TXRX_H
+#include <ethdev_driver.h>
+#include "sxe2_queue.h"
+
+void sxe2_set_common_function(struct rte_eth_dev *dev);
+
+uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
+
+void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
+
+#endif
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
new file mode 100644
index 0000000000..6d37fdef36
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -0,0 +1,919 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include <unistd.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_poll.h"
+#include "sxe2_txrx.h"
+#include "sxe2_queue.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+static __rte_always_inline int32_t
+sxe2_tx_bufs_free(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX];
+ int32_t ret;
+ uint32_t i;
+ uint16_t rs_thresh;
+ uint16_t free_num;
+ if ((txq->desc_ring[txq->next_dd].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) !=
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) {
+ ret = 0;
+ goto l_end;
+ }
+ rs_thresh = txq->rs_thresh;
+ buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1];
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) {
+ mbuf = buffer[0].mbuf;
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = buffer[i].mbuf;
+ if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) {
+ mbuf_free_arr[free_num] = mbuf;
+ free_num++;
+ } else {
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ }
+ }
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ } else {
+ for (i = 0; i < rs_thresh; ++i, ++buffer) {
+ rte_mempool_put(buffer->mbuf->pool, buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+ }
+ } else {
+ for (i = 0; i < rs_thresh; ++i, ++buffer) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer->mbuf);
+ if (mbuf != NULL)
+ rte_mempool_put(mbuf->pool, mbuf);
+ buffer->mbuf = NULL;
+ }
+ }
+ txq->desc_free_num += rs_thresh;
+ txq->next_dd += rs_thresh;
+ if (txq->next_dd >= txq->ring_depth)
+ txq->next_dd = rs_thresh - 1;
+ ret = rs_thresh;
+l_end:
+ return ret;
+}
+
+static inline int32_t sxe2_tx_cleanup(struct sxe2_tx_queue *txq)
+{
+ int32_t ret = 0;
+ volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring;
+ struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring;
+ uint16_t ring_depth = txq->ring_depth;
+ uint16_t next_clean = txq->next_clean;
+ uint16_t clean_last;
+ uint16_t clean_num;
+
+ clean_last = next_clean + txq->rs_thresh;
+ if (clean_last >= ring_depth)
+ clean_last = clean_last - ring_depth;
+
+ clean_last = buffer_ring[clean_last].last_id;
+ if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) !=
+ (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) {
+ PMD_LOG_DEBUG(TX, "desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64,
+ clean_last, txq->port_id,
+ txq->queue_id, txq->desc_ring[clean_last].wb.dd);
+ ret = -1;
+ goto l_end;
+ }
+
+ if (clean_last > next_clean)
+ clean_num = clean_last - next_clean;
+ else
+ clean_num = ring_depth - next_clean + clean_last;
+
+ desc_ring[clean_last].wb.dd = 0;
+
+ txq->next_clean = clean_last;
+ txq->desc_free_num += clean_num;
+
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt)
+{
+ struct rte_mbuf *m_seg = tx_pkt;
+ uint16_t count = 0;
+
+ while (m_seg != NULL) {
+ count += DIV_ROUND_UP(m_seg->data_len,
+ SXE2_TX_MAX_DATA_NUM_PER_DESC);
+ m_seg = m_seg->next;
+ }
+
+ return count;
+}
+
+static __rte_always_inline void
+sxe2_tx_desc_checksum_fill(uint64_t offloads, uint32_t *desc_cmd, uint32_t *desc_offset,
+ union sxe2_tx_offload_info ol_info)
+{
+ if (offloads & RTE_MBUF_F_TX_IP_CKSUM) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV4) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV6) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ }
+
+ if (offloads & RTE_MBUF_F_TX_TCP_SEG) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ goto l_end;
+ }
+
+ if (offloads & RTE_MBUF_F_TX_UDP_SEG) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ goto l_end;
+ }
+
+ switch (offloads & RTE_MBUF_F_TX_L4_MASK) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ default:
+ break;
+ }
+
+l_end:
+ return;
+}
+
+static __rte_always_inline uint64_t
+sxe2_tx_data_desc_build_cobt(uint32_t cmd, uint32_t offset, uint16_t buf_size, uint16_t l2tag)
+{
+ return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA |
+ (((uint64_t)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) |
+ (((uint64_t)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) |
+ (((uint64_t)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) |
+ (((uint64_t)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT));
+}
+
+uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = tx_queue;
+ struct sxe2_tx_buffer *buffer_ring;
+ struct sxe2_tx_buffer *buffer;
+ struct sxe2_tx_buffer *next_buffer;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ volatile union sxe2_tx_data_desc *desc_ring;
+ volatile union sxe2_tx_data_desc *desc;
+ volatile struct sxe2_tx_context_desc *ctxt_desc;
+ union sxe2_tx_offload_info ol_info;
+ struct sxe2_vsi *vsi = txq->vsi;
+ rte_iova_t buf_dma_addr;
+ uint64_t offloads;
+ uint64_t desc_type_cmd_tso_mss;
+ uint32_t desc_cmd;
+ uint32_t desc_offset;
+ uint32_t desc_tag;
+ uint32_t desc_tunneling_params;
+ uint16_t ipsec_offset;
+ uint16_t ctxt_desc_num;
+ uint16_t desc_sum_num;
+ uint16_t tx_num;
+ uint16_t seg_len;
+ uint16_t next_use;
+ uint16_t last_use;
+ uint16_t desc_l2tag2;
+
+ buffer_ring = txq->buffer_ring;
+ desc_ring = txq->desc_ring;
+ next_use = txq->next_use;
+ buffer = &buffer_ring[next_use];
+
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_cleanup(txq);
+
+ for (tx_num = 0; tx_num < nb_pkts; tx_num++) {
+ tx_pkt = *tx_pkts++;
+ desc_cmd = 0;
+ desc_offset = 0;
+ desc_tag = 0;
+ desc_tunneling_params = 0;
+ ipsec_offset = 0;
+ offloads = tx_pkt->ol_flags;
+ ol_info.l2_len = tx_pkt->l2_len;
+ ol_info.l3_len = tx_pkt->l3_len;
+ ol_info.l4_len = tx_pkt->l4_len;
+ ol_info.tso_segsz = tx_pkt->tso_segsz;
+ ol_info.outer_l2_len = tx_pkt->outer_l2_len;
+ ol_info.outer_l3_len = tx_pkt->outer_l3_len;
+
+ ctxt_desc_num = (offloads &
+ SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 1 : 0;
+ if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW))
+ ctxt_desc_num = 1;
+
+ if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))
+ desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num;
+ else
+ desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num;
+
+ last_use = next_use + desc_sum_num - 1;
+ if (last_use >= txq->ring_depth)
+ last_use = last_use - txq->ring_depth;
+
+ if (desc_sum_num > txq->desc_free_num) {
+ if (unlikely(sxe2_tx_cleanup(txq) != 0))
+ goto l_exit_logic;
+
+ if (unlikely(desc_sum_num > txq->rs_thresh)) {
+ while (desc_sum_num > txq->desc_free_num)
+ if (unlikely(sxe2_tx_cleanup(txq) != 0))
+ goto l_exit_logic;
+ }
+ }
+
+ desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len);
+
+ if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) {
+ sxe2_tx_desc_checksum_fill(offloads, &desc_cmd,
+ &desc_offset, ol_info);
+ }
+
+ if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1;
+ desc_tag = tx_pkt->vlan_tci;
+ }
+
+ if (ctxt_desc_num) {
+ ctxt_desc = (volatile struct sxe2_tx_context_desc *)
+ &desc_ring[next_use];
+ desc_l2tag2 = 0;
+ desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT;
+
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+
+ if (buffer->mbuf) {
+ rte_pktmbuf_free_seg(buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+
+ if (offloads & RTE_MBUF_F_TX_QINQ) {
+ desc_l2tag2 = tx_pkt->vlan_tci_outer;
+ desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK;
+ }
+
+ ctxt_desc->tunneling_params =
+ rte_cpu_to_le_32(desc_tunneling_params);
+ ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2);
+ ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss);
+ ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset);
+
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+ }
+
+ m_seg = tx_pkt;
+
+ do {
+ desc = &desc_ring[next_use];
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+ if (buffer->mbuf) {
+ rte_pktmbuf_free_seg(buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+
+ buffer->mbuf = m_seg;
+ seg_len = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) &&
+ unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) {
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset,
+ SXE2_TX_MAX_DATA_NUM_PER_DESC,
+ desc_tag);
+ buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC;
+ seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC;
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+ desc = &desc_ring[next_use];
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+ }
+
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(desc_cmd,
+ desc_offset, seg_len, desc_tag);
+
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP;
+ txq->desc_used_num += desc_sum_num;
+ txq->desc_free_num -= desc_sum_num;
+
+ if (txq->desc_used_num >= txq->rs_thresh) {
+ PMD_LOG_DEBUG(TX, "Tx pkts set RS bit."
+ "last_use=%u port_id=%u, queue_id=%u",
+ last_use, txq->port_id, txq->queue_id);
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS;
+ txq->desc_used_num = 0;
+ }
+
+ desc->read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT);
+ }
+
+l_exit_logic:
+ if (tx_num == 0)
+ goto l_end;
+ goto l_end_of_tx;
+l_end_of_tx:
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, next_use, tx_num);
+
+ txq->next_use = next_use;
+
+l_end:
+ return tx_num;
+}
+
+static __rte_always_inline void
+sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf **tx_pkts)
+{
+ rte_iova_t buf_dma_addr;
+ uint32_t desc_offset;
+ buf_dma_addr = rte_mbuf_data_iova(*tx_pkts);
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP,
+ desc_offset, (*tx_pkts)->data_len, 0);
+}
+static __rte_always_inline void
+sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf **tx_pkts)
+{
+ rte_iova_t buf_dma_addr;
+ uint32_t i;
+ uint32_t desc_offset;
+ for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) {
+ buf_dma_addr = rte_mbuf_data_iova(*tx_pkts);
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP,
+ desc_offset,
+ (*tx_pkts)->data_len,
+ 0);
+ }
+}
+
+static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use];
+ volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use];
+ uint32_t i, j;
+ uint32_t mainpart;
+ uint32_t leftover;
+ mainpart = nb_pkts & ((uint32_t)~SXE2_TX_FILL_PER_LOOP_MASK);
+ leftover = nb_pkts & ((uint32_t)SXE2_TX_FILL_PER_LOOP_MASK);
+ for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) {
+ for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j)
+ (buffer + i + j)->mbuf = *(tx_pkts + i + j);
+ sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i);
+ }
+ if (unlikely(leftover > 0)) {
+ for (i = 0; i < leftover; ++i) {
+ (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i);
+ sxe2_tx_data_desc_fill(desc + mainpart + i,
+ tx_pkts + mainpart + i);
+ }
+ }
+}
+
+static inline uint16_t sxe2_tx_pkts_batch(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+ volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring;
+ uint16_t res_num = 0;
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_bufs_free(txq);
+ nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts);
+ if (unlikely(nb_pkts == 0)) {
+ PMD_LOG_DEBUG(TX, "Tx batch: may not enough free desc, "
+ "free_desc=%u, need_tx_pkts=%u",
+ txq->desc_free_num, nb_pkts);
+ goto l_end;
+ }
+ txq->desc_free_num -= nb_pkts;
+ if ((txq->next_use + nb_pkts) > txq->ring_depth) {
+ res_num = txq->ring_depth - txq->next_use;
+ sxe2_tx_ring_fill(txq, tx_pkts, res_num);
+ desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs = txq->rs_thresh - 1;
+ txq->next_use = 0;
+ }
+ sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num);
+ txq->next_use = txq->next_use + (nb_pkts - res_num);
+ if (txq->next_use > txq->next_rs) {
+ desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs += txq->rs_thresh;
+ if (txq->next_rs >= txq->ring_depth)
+ txq->next_rs = txq->rs_thresh - 1;
+ }
+ if (txq->next_use >= txq->ring_depth)
+ txq->next_use = 0;
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, txq->next_use, nb_pkts);
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use);
+l_end:
+ return nb_pkts;
+}
+
+static inline void
+sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, uint16_t hold_num, uint16_t rx_id)
+{
+ hold_num += rxq->hold_num;
+
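+ /* Defer the RDT tail write until at least rx_free_thresh
+  * descriptors have been recycled, limiting MMIO writes on the
+  * hot path; the tail is left one entry behind rx_id.
+  */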
+ if (hold_num > rxq->rx_free_thresh) {
+ rx_id = (uint16_t)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1));
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id);
+ hold_num = 0;
+ }
+ rxq->hold_num = hold_num;
+}
+
+static inline uint64_t
+sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq,
+ union sxe2_rx_desc *desc)
+{
+ uint64_t flags = 0;
+ uint64_t desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+
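+ /* Map descriptor error bits to mbuf checksum flags; when the
+  * hardware did not check L3/L4 (L3L4_P clear), report no flags.
+  */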
+ if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK)))
+ goto l_end;
+
+ if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK)))
+ flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+ else
+ flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) {
+ flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD);
+ goto l_end;
+ }
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK)))
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ else
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK)))
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+ else
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+
+ if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK)))
+ flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+l_end:
+ return flags;
+}
+
+static __rte_always_inline void
+sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf,
+ union sxe2_rx_desc *rxd)
+{
+ uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ uint64_t qword1;
+ uint64_t pkt_flags;
+ qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len);
+
+ mbuf->ol_flags = 0;
+ mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)];
+
+ pkt_flags = sxe2_rx_desc_error_para(rxq, rxd);
+
+ mbuf->ol_flags |= pkt_flags;
+}
+
+static __rte_always_inline void
+sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf,
+ union sxe2_rx_desc *rxd)
+{
+ uint64_t qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes,
+ mbuf->pkt_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) {
+ case SXE2_RX_DESC_STATUS_UNICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_MUTICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_BOARDCAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ default:
+ break;
+ }
+}
+
+uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile union sxe2_rx_desc *desc;
+ union sxe2_rx_desc desc_tmp;
+ struct rte_mbuf **buffer_ring;
+ struct rte_mbuf **cur_buffer;
+ struct rte_mbuf *cur_mbuf;
+ struct rte_mbuf *new_mbuf;
+ struct rte_mbuf *first_seg;
+ struct rte_mbuf *last_seg;
+ uint64_t qword1;
+ uint16_t done_num;
+ uint16_t hold_num;
+ uint16_t cur_idx;
+ uint16_t pkt_len;
+
+ desc_ring = rxq->desc_ring;
+ buffer_ring = rxq->buffer_ring;
+ cur_idx = rxq->processing_idx;
+ first_seg = rxq->pkt_first_seg;
+ last_seg = rxq->pkt_last_seg;
+ done_num = 0;
+ hold_num = 0;
+ while (done_num < nb_pkts) {
+ desc = &desc_ring[cur_idx];
+ qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+ if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1))
+ break;
+
+ new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (unlikely(new_mbuf == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u "
+ "queue_id:%u", rxq->port_id, rxq->queue_id);
+ break;
+ }
+
+ hold_num++;
+ desc_tmp = *desc;
+ cur_buffer = &buffer_ring[cur_idx];
+ cur_idx++;
+ if (unlikely(cur_idx == rxq->ring_depth))
+ cur_idx = 0;
+
+ rte_prefetch0(buffer_ring[cur_idx]);
+
+ if (0 == (cur_idx & 0x3)) {
+ rte_prefetch0(&desc_ring[cur_idx]);
+ rte_prefetch0(&buffer_ring[cur_idx]);
+ }
+
+ cur_mbuf = *cur_buffer;
+
+ *cur_buffer = new_mbuf;
+
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+ cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+
+ if (first_seg == NULL) {
+ first_seg = cur_mbuf;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+
+ if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) {
+ last_seg = cur_mbuf;
+ continue;
+ }
+
+ if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) ||
+ unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ continue;
+ }
+
+ cur_mbuf->next = NULL;
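+ /* Strip the CRC kept by hardware: if the last segment holds only
+  * CRC bytes, free it and trim the remainder from the previous
+  * segment; otherwise just shorten the last segment.
+  */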
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= RTE_ETHER_CRC_LEN;
+
+ if (pkt_len <= RTE_ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ first_seg->nb_segs--;
+ last_seg->data_len = last_seg->data_len + pkt_len -
+ RTE_ETHER_CRC_LEN;
+ last_seg->next = NULL;
+ } else {
+ cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN;
+ }
+
+ } else if (pkt_len == 0) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ first_seg->nb_segs--;
+ last_seg->next = NULL;
+ }
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+ first_seg->port = rxq->port_id;
+
+ sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp);
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en)
+ sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp);
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+
+ rx_pkts[done_num] = first_seg;
+ done_num++;
+
+ first_seg = NULL;
+ }
+
+ rxq->processing_idx = cur_idx;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ sxe2_update_rx_tail(rxq, hold_num, cur_idx);
+
+ return done_num;
+}
+
+uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile union sxe2_rx_desc *desc;
+ union sxe2_rx_desc desc_tmp;
+ struct rte_mbuf **buffer_ring;
+ struct rte_mbuf **cur_buffer;
+ struct rte_mbuf *cur_mbuf;
+ struct rte_mbuf *cur_mbuf_pay;
+ struct rte_mbuf *new_mbuf;
+ struct rte_mbuf *new_mbuf_pay = NULL;
+ struct rte_mbuf *first_seg;
+ struct rte_mbuf *last_seg;
+ uint64_t qword1;
+ uint16_t done_num;
+ uint16_t hold_num;
+ uint16_t cur_idx;
+ uint16_t pkt_len;
+ uint16_t hdr_len;
+
+ desc_ring = rxq->desc_ring;
+ buffer_ring = rxq->buffer_ring;
+ cur_idx = rxq->processing_idx;
+ first_seg = rxq->pkt_first_seg;
+ last_seg = rxq->pkt_last_seg;
+ done_num = 0;
+ hold_num = 0;
+ new_mbuf = NULL;
+
+ while (done_num < nb_pkts) {
+ desc = &desc_ring[cur_idx];
+ qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+
+ if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1))
+ break;
+
+ if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 ||
+ first_seg == NULL) {
+ new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (unlikely(new_mbuf == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ }
+
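+ /* In buffer-split mode the packet header is received into an
+  * mbuf from mb_pool and the payload into a second mbuf taken
+  * from the split segment pool (rx_seg[1].mp).
+  */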
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+ new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
+ if (unlikely(new_mbuf_pay == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ if (new_mbuf != NULL)
+ rte_pktmbuf_free(new_mbuf);
+ new_mbuf = NULL;
+ break;
+ }
+ }
+
+ hold_num++;
+ desc_tmp = *desc;
+ cur_buffer = &buffer_ring[cur_idx];
+ cur_idx++;
+ if (unlikely(cur_idx == rxq->ring_depth))
+ cur_idx = 0;
+ rte_prefetch0(buffer_ring[cur_idx]);
+ if (0 == (cur_idx & 0x3)) {
+ rte_prefetch0(&desc_ring[cur_idx]);
+ rte_prefetch0(&buffer_ring[cur_idx]);
+ }
+ cur_mbuf = *cur_buffer;
+ if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ *cur_buffer = new_mbuf;
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+ } else {
+ if (first_seg == NULL) {
+ *cur_buffer = new_mbuf;
+ new_mbuf->next = new_mbuf_pay;
+ new_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ new_mbuf_pay->next = NULL;
+ new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ desc->read.hdr_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay));
+ } else {
+ cur_mbuf_pay = cur_mbuf->next;
+ new_mbuf_pay->next = NULL;
+ new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ cur_mbuf->next = new_mbuf_pay;
+ desc->read.hdr_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf));
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay));
+ cur_mbuf = cur_mbuf_pay;
+ }
+ }
+ new_mbuf = NULL;
+ if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+ cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ if (first_seg == NULL) {
+ first_seg = cur_mbuf;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+ } else {
+ if (first_seg == NULL) {
+ cur_mbuf->nb_segs = 2;
+ cur_mbuf->next->next = NULL;
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = hdr_len;
+ cur_mbuf->pkt_len = hdr_len + pkt_len;
+ cur_mbuf->next->data_len = pkt_len;
+ first_seg = cur_mbuf;
+ cur_mbuf = cur_mbuf->next;
+ last_seg = cur_mbuf;
+ } else {
+ cur_mbuf->nb_segs = 1;
+ cur_mbuf->next = NULL;
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+ }
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+
+ rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg));
+#endif
+
+ if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_EOP_MASK)) {
+ last_seg = cur_mbuf;
+ continue;
+ }
+
+ if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) ||
+ unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ continue;
+ }
+
+ cur_mbuf->next = NULL;
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= RTE_ETHER_CRC_LEN;
+ if (pkt_len <= RTE_ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ cur_mbuf = NULL;
+ first_seg->nb_segs--;
+ last_seg->data_len = last_seg->data_len +
+ pkt_len - RTE_ETHER_CRC_LEN;
+ last_seg->next = NULL;
+ } else {
+ cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN;
+ }
+ } else if (pkt_len == 0) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ cur_mbuf = NULL;
+ first_seg->nb_segs--;
+ last_seg->next = NULL;
+ }
+
+ first_seg->port = rxq->port_id;
+ sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp);
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en)
+ sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp);
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+
+ rx_pkts[done_num] = first_seg;
+ done_num++;
+
+ first_seg = NULL;
+ }
+
+ rxq->processing_idx = cur_idx;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ sxe2_update_rx_tail(rxq, hold_num, cur_idx);
+
+ return done_num;
+}
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v14 10/11] net/sxe2: add vectorized Rx and Tx
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (8 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 09/11] drivers: add data path for Rx and Tx liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 2:55 ` [PATCH v14 11/11] net/sxe2: implement Tx done cleanup liujie5
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch implements the vectorized data path for the sxe2 PMD.
It uses SIMD instructions (e.g., SSE) to process multiple packets
per loop iteration, significantly improving throughput for
small-packet workloads.
The implementation includes:
* Vectorized Rx burst function for bulk descriptor processing.
* Vectorized Tx burst function with optimized resource cleanup.
* Capability flags update to reflect vectorized path support.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
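A note on the SSE fast path: the Rx ring uses a 16-byte read-format
descriptor (read.pkt_addr / read.hdr_addr), which maps naturally onto
one 128-bit store per descriptor. The sketch below is illustrative
only; the struct layout and function name are assumptions made for
the example, not the sxe2 definitions.

#include <stdint.h>
#include <emmintrin.h> /* SSE2 */
#include <rte_mbuf.h>

/* Hypothetical 16-byte read-format descriptor; pkt_addr is assumed
 * to occupy the low quadword, as in the scalar Rx path. */
struct rx_read_desc {
	uint64_t pkt_addr;
	uint64_t hdr_addr;
};

/* Refill two adjacent descriptors with fresh mbufs, one 16-byte
 * store per descriptor instead of two 8-byte writes; the ring is
 * assumed 16-byte aligned. */
static inline void
rxq_rearm_two(volatile struct rx_read_desc *desc,
	      struct rte_mbuf *m0, struct rte_mbuf *m1)
{
	/* Low quadword: packet DMA address; high quadword: hdr_addr
	 * cleared, matching what the scalar Rx path writes. */
	__m128i d0 = _mm_set_epi64x(0, (int64_t)rte_mbuf_data_iova_default(m0));
	__m128i d1 = _mm_set_epi64x(0, (int64_t)rte_mbuf_data_iova_default(m1));

	_mm_store_si128((__m128i *)(uintptr_t)&desc[0], d0);
	_mm_store_si128((__m128i *)(uintptr_t)&desc[1], d1);
}

The actual implementation presumably rearms in larger bursts (see
SXE2_RX_REARM_THRESH_VEC) and seeds each new mbuf's header from the
cached 64-bit template in rxq->mbuf_init_value.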
drivers/net/sxe2/meson.build | 5 +
drivers/net/sxe2/sxe2_ethdev.c | 31 +-
drivers/net/sxe2/sxe2_queue.c | 28 ++
drivers/net/sxe2/sxe2_queue.h | 4 +
drivers/net/sxe2/sxe2_txrx.c | 197 +++++++--
drivers/net/sxe2/sxe2_txrx.h | 13 +-
drivers/net/sxe2/sxe2_txrx_poll.c | 29 +-
drivers/net/sxe2/sxe2_txrx_poll.h | 4 +
drivers/net/sxe2/sxe2_txrx_vec.c | 200 +++++++++
drivers/net/sxe2/sxe2_txrx_vec.h | 72 ++++
drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++
drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++++++++++++++++
12 files changed, 1294 insertions(+), 73 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 5645e3ad61..3df57aee8c 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -13,6 +13,10 @@ deps += ['common_sxe2', 'hash','cryptodev','security']
includes += include_directories('../../common/sxe2')
+if arch_subdir == 'x86'
+ sources += files('sxe2_txrx_vec_sse.c')
+endif
+
sources += files(
'sxe2_ethdev.c',
'sxe2_cmd_chnl.c',
@@ -22,6 +26,7 @@ sources += files(
'sxe2_rx.c',
'sxe2_txrx_poll.c',
'sxe2_txrx.c',
+ 'sxe2_txrx_vec.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 0066eb67b1..6c41cc83bb 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -107,25 +107,6 @@ static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
return ret;
}
-static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
-{
- int32_t ret = 0;
- ret = sxe2_txqs_all_start(dev);
- if (ret) {
- PMD_LOG_ERR(INIT, "Failed to start tx queue.");
- goto l_end;
- }
-
- ret = sxe2_rxqs_all_start(dev);
- if (ret) {
- PMD_LOG_ERR(INIT, "Failed to start rx queue.");
- sxe2_txqs_all_stop(dev);
- }
-
-l_end:
- return ret;
-}
-
static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -158,7 +139,7 @@ static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
{
(void)sxe2_dev_stop(dev);
-
+ (void)sxe2_queues_release(dev);
sxe2_vsi_uninit(dev);
sxe2_dev_pci_map_uinit(dev);
@@ -296,13 +277,19 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_close = sxe2_dev_close,
.dev_infos_get = sxe2_dev_infos_get,
+ .rx_queue_start = sxe2_rx_queue_start,
+ .rx_queue_stop = sxe2_rx_queue_stop,
+ .tx_queue_start = sxe2_tx_queue_start,
+ .tx_queue_stop = sxe2_tx_queue_stop,
.rx_queue_setup = sxe2_rx_queue_setup,
- .tx_queue_setup = sxe2_tx_queue_setup,
.rx_queue_release = sxe2_rx_queue_release,
+ .tx_queue_setup = sxe2_tx_queue_setup,
.tx_queue_release = sxe2_tx_queue_release,
.rxq_info_get = sxe2_rx_queue_info_get,
.txq_info_get = sxe2_tx_queue_info_get,
+ .rx_burst_mode_get = sxe2_rx_burst_mode_get,
+ .tx_burst_mode_get = sxe2_tx_burst_mode_get,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
@@ -771,8 +758,6 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
sxe2_rx_mode_func_set(dev);
sxe2_tx_mode_func_set(dev);
- if (ret != 0)
- PMD_LOG_ERR(INIT, "Failed to mp init (secondary), ret=%d", ret);
goto l_end;
}
diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c
index 93f8236381..1786d6ea4f 100644
--- a/drivers/net/sxe2/sxe2_queue.c
+++ b/drivers/net/sxe2/sxe2_queue.c
@@ -5,6 +5,8 @@
#include "sxe2_ethdev.h"
#include "sxe2_queue.h"
#include "sxe2_common_log.h"
+#include "sxe2_tx.h"
+#include "sxe2_rx.h"
void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
struct sxe2_drv_queue_caps *q_caps)
@@ -36,3 +38,29 @@ int32_t sxe2_queues_init(struct rte_eth_dev *dev)
return ret;
}
+
+int32_t sxe2_queues_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+
+ ret = sxe2_txqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start tx queue.");
+ goto l_end;
+ }
+
+ ret = sxe2_rxqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start rx queue.");
+ sxe2_txqs_all_stop(dev);
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_queues_release(struct rte_eth_dev *dev)
+{
+ sxe2_all_rxqs_release(dev);
+
+ sxe2_all_txqs_release(dev);
+}
diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h
index e587e582fa..5195e2dd16 100644
--- a/drivers/net/sxe2/sxe2_queue.h
+++ b/drivers/net/sxe2/sxe2_queue.h
@@ -188,4 +188,8 @@ void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
int32_t sxe2_queues_init(struct rte_eth_dev *dev);
+int32_t sxe2_queues_start(struct rte_eth_dev *dev);
+
+void sxe2_queues_release(struct rte_eth_dev *dev);
+
#endif /* __SXE2_QUEUE_H__ */
diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c
index 2531b49a52..20b4dc55db 100644
--- a/drivers/net/sxe2/sxe2_txrx.c
+++ b/drivers/net/sxe2/sxe2_txrx.c
@@ -9,12 +9,11 @@
#include <rte_memzone.h>
#include <ethdev_driver.h>
#include <unistd.h>
-
#include "sxe2_txrx.h"
#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_vec.h"
#include "sxe2_txrx_poll.h"
#include "sxe2_ethdev.h"
-
#include "sxe2_common_log.h"
#include "sxe2_osal.h"
#include "sxe2_cmd_chnl.h"
@@ -22,6 +21,30 @@
#include <rte_cpuflags.h>
#endif
+int32_t __rte_cold
+sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
+ uint32_t *batch_flags)
+{
+ struct sxe2_tx_queue *txq;
+ int32_t ret = 0;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i];
+ if (txq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+ if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ }
+ *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH;
+l_end:
+ return ret;
+}
static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
{
struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
@@ -32,7 +55,6 @@ static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
ret = -EINVAL;
goto l_end;
}
-
desc_idx = txq->next_use + offset;
desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh);
if (desc_idx >= txq->ring_depth) {
@@ -40,19 +62,16 @@ static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
if (desc_idx >= txq->ring_depth)
desc_idx -= txq->ring_depth;
}
-
if (desc_idx == 0)
desc_idx = txq->rs_thresh - 1;
else
desc_idx -= 1;
-
if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) ==
(txq->desc_ring[desc_idx].wb.dd &
rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK)))
ret = RTE_ETH_TX_DESC_DONE;
else
ret = RTE_ETH_TX_DESC_FULL;
-
l_end:
return ret;
}
@@ -60,7 +79,6 @@ static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
{
struct rte_mbuf *m_seg = mbuf;
-
while (m_seg != NULL) {
if (m_seg->data_len == 0)
return -EINVAL;
@@ -68,6 +86,7 @@ static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
}
return 0;
+
}
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
@@ -97,12 +116,10 @@ uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
rte_errno = -EINVAL;
goto l_end;
}
-
if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) {
rte_errno = -EINVAL;
goto l_end;
}
-
#ifdef RTE_ETHDEV_DEBUG_TX
ret = rte_validate_tx_offload(mbuf);
if (ret != 0) {
@@ -115,14 +132,12 @@ uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
rte_errno = -ret;
goto l_end;
}
-
ret = sxe2_tx_mbuf_empty_check(mbuf);
if (ret != 0) {
rte_errno = -ret;
goto l_end;
}
}
-
l_end:
return i;
}
@@ -131,16 +146,85 @@ void sxe2_tx_mode_func_set(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
uint32_t tx_mode_flags = 0;
+ int32_t ret;
+ uint32_t vec_flags;
+ uint32_t batch_flags;
PMD_INIT_FUNC_TRACE();
-
- dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
- dev->tx_pkt_burst = sxe2_tx_pkts;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ret = sxe2_tx_vec_support_check(dev, &vec_flags);
+ if (ret == 0 &&
+ (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) {
+ tx_mode_flags = vec_flags;
+#ifdef RTE_ARCH_X86
+ if (((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0))
+ tx_mode_flags |= SXE2_TX_MODE_VEC_SSE;
+#endif
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) {
+ ret = sxe2_tx_queues_vec_prepare(dev);
+ if (ret != 0)
+ tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK);
+ }
+ }
+ ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags);
+ if (ret == 0 && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH)
+ tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH;
+ }
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) {
+ dev->tx_pkt_prepare = NULL;
+#ifdef RTE_ARCH_X86
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) {
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse;
+ } else {
+ dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple;
+ }
+#endif
+ } else {
+ if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) {
+ dev->tx_pkt_prepare = NULL;
+ dev->tx_pkt_burst = sxe2_tx_pkts_simple;
+ } else {
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts;
+ }
+ }
adapter->q_ctxt.tx_mode_flags = tx_mode_flags;
PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.",
tx_mode_flags, dev->data->port_id);
}
+static const struct {
+ eth_tx_burst_t tx_burst;
+ const char *info;
+} sxe2_tx_burst_infos[] = {
+ { sxe2_tx_pkts, "Scalar" },
+#ifdef RTE_ARCH_X86
+ { sxe2_tx_pkts_vec_sse, "Vector SSE" },
+ { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" },
+#endif
+};
+
+int32_t sxe2_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode)
+{
+ eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+ int32_t ret = -EINVAL;
+ uint32_t i;
+ uint32_t size;
+
+ size = RTE_DIM(sxe2_tx_burst_infos);
+ for (i = 0; i < size; ++i) {
+ if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ sxe2_tx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+ return ret;
+}
+
static int32_t sxe2_rx_desciptor_status(void *rx_queue, uint16_t offset)
{
struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
@@ -151,22 +235,18 @@ static int32_t sxe2_rx_desciptor_status(void *rx_queue, uint16_t offset)
ret = -EINVAL;
goto l_end;
}
-
if (offset >= rxq->ring_depth - rxq->hold_num) {
ret = RTE_ETH_RX_DESC_UNAVAIL;
goto l_end;
}
-
if (rxq->processing_idx + offset >= rxq->ring_depth)
desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth];
else
desc = &rxq->desc_ring[rxq->processing_idx + offset];
-
if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK)
ret = RTE_ETH_RX_DESC_DONE;
else
ret = RTE_ETH_RX_DESC_AVAIL;
-
l_end:
PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u",
offset, ret, rxq->queue_id, rxq->port_id);
@@ -189,55 +269,86 @@ static int32_t sxe2_rx_queue_count(void *rx_queue)
else
desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
}
-
PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u",
done_num, rxq->queue_id, rxq->port_id);
-
return done_num;
}
-static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
-{
- struct sxe2_rx_queue *rxq;
- bool en = false;
- uint16_t i;
-
- for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
- if (rxq == NULL)
- continue;
-
- if (0 != (rxq->offloads & offload)) {
- en = true;
- goto l_end;
- }
- }
-
-l_end:
- return en;
-}
-
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
uint32_t rx_mode_flags = 0;
+ int32_t ret;
+ uint32_t vec_flags;
PMD_INIT_FUNC_TRACE();
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ret = sxe2_rx_vec_support_check(dev, &vec_flags);
+ if (ret == 0 &&
+ rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ rx_mode_flags = vec_flags;
+#ifdef RTE_ARCH_X86
+ if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) &&
+ rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)
+ rx_mode_flags |= SXE2_RX_MODE_VEC_SSE;
+#endif
+ if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) {
+ ret = sxe2_rx_queues_vec_prepare(dev);
+ if (ret != 0)
+ rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK);
+ }
+ }
+ }
+#ifdef RTE_ARCH_X86
+ if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) {
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload;
+ goto l_end;
+ }
+#endif
if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split;
else
dev->rx_pkt_burst = sxe2_rx_pkts_scattered;
-
+ goto l_end;
+l_end:
PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.",
rx_mode_flags, dev->data->port_id);
adapter->q_ctxt.rx_mode_flags = rx_mode_flags;
}
+static const struct {
+ eth_rx_burst_t rx_burst;
+ const char *info;
+} sxe2_rx_burst_infos[] = {
+ { sxe2_rx_pkts_scattered, "Scalar Scattered" },
+ { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" },
+#ifdef RTE_ARCH_X86
+ { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" },
+#endif
+};
+
+int32_t sxe2_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode)
+{
+ eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ int32_t ret = -EINVAL;
+ uint32_t i, size;
+ size = RTE_DIM(sxe2_rx_burst_infos);
+ for (i = 0; i < size; ++i) {
+ if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ sxe2_rx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+ return ret;
+}
+
void sxe2_set_common_function(struct rte_eth_dev *dev)
{
PMD_INIT_FUNC_TRACE();
-
dev->rx_queue_count = sxe2_rx_queue_count;
dev->rx_descriptor_status = sxe2_rx_desciptor_status;
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
index f81c11ac56..977d8b3b1c 100644
--- a/drivers/net/sxe2/sxe2_txrx.h
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -6,16 +6,17 @@
#define SXE2_TXRX_H
#include <ethdev_driver.h>
#include "sxe2_queue.h"
-
void sxe2_set_common_function(struct rte_eth_dev *dev);
+int32_t __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
+ uint32_t *batch_flags);
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
-
void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
-
void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
-
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
-
-#endif
+int32_t sxe2_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode);
+int32_t sxe2_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode);
+#endif /* SXE2_TXRX_H */
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
index 6d37fdef36..78c29f8584 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.c
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -369,11 +369,12 @@ uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt
desc->read.type_cmd_off_bsz_l2t |=
rte_cpu_to_le_64(((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT);
}
+ goto l_end_of_tx;
l_exit_logic:
if (tx_num == 0)
goto l_end;
- goto l_end_of_tx;
+
l_end_of_tx:
SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
@@ -483,6 +484,32 @@ static inline uint16_t sxe2_tx_pkts_batch(void *tx_queue,
return nb_pkts;
}
+uint16_t sxe2_tx_pkts_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ uint16_t tx_done_num;
+ uint16_t tx_once_num;
+ uint16_t tx_need_num;
+ if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) {
+ tx_done_num = sxe2_tx_pkts_batch(tx_queue,
+ tx_pkts, nb_pkts);
+ goto l_end;
+ }
+ tx_done_num = 0;
+ while (nb_pkts) {
+ tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM);
+ tx_once_num = sxe2_tx_pkts_batch(tx_queue,
+ &tx_pkts[tx_done_num],
+ tx_need_num);
+ nb_pkts -= tx_once_num;
+ tx_done_num += tx_once_num;
+ if (tx_once_num < tx_need_num)
+ break;
+ }
+l_end:
+ return tx_done_num;
+}
+
static inline void
sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, uint16_t hold_num, uint16_t rx_id)
{
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h
index f45e33f9b7..6bb2238a2f 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.h
+++ b/drivers/net/sxe2/sxe2_txrx_poll.h
@@ -9,6 +9,8 @@
uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_tx_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c
new file mode 100644
index 0000000000..1e03b53d67
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec.c
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_txrx_vec.h"
+#include "sxe2_txrx_vec_common.h"
+#include "sxe2_queue.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+int32_t __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags)
+{
+ struct sxe2_rx_queue *rxq;
+ int32_t ret = 0;
+ uint16_t i;
+
+ *vec_flags = SXE2_RX_MODE_VEC_SIMPLE;
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+ if (!rte_is_power_of_2(rxq->ring_depth)) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC &&
+ (rxq->ring_depth % rxq->rx_free_thresh) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0)
+ *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD;
+ }
+l_end:
+ return ret;
+}
+
+bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
+{
+ struct sxe2_rx_queue *rxq;
+ bool en = false;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL)
+ continue;
+ if ((rxq->offloads & offload) != 0) {
+ en = true;
+ goto l_end;
+ }
+ }
+l_end:
+ return en;
+}
+
+static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq)
+{
+ const uint16_t mask = rxq->ring_depth - 1;
+ uint16_t i;
+
+ if (unlikely(!rxq->buffer_ring)) {
+ PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring if NULL."
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id);
+ return;
+ }
+ if (rxq->realloc_num >= rxq->ring_depth)
+ return;
+ if (rxq->realloc_num == 0) {
+ for (i = 0; i < rxq->ring_depth; ++i) {
+ if (rxq->buffer_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ } else {
+ for (i = rxq->processing_idx;
+ i != rxq->realloc_start;
+ i = (i + 1) & mask) {
+ if (rxq->buffer_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ }
+ rxq->realloc_num = rxq->ring_depth;
+ memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0]));
+}
+
+static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq)
+{
+ uintptr_t data;
+ struct rte_mbuf mbuf_def;
+
+ memset(&mbuf_def, 0, sizeof(mbuf_def));
+ mbuf_def.buf_addr = 0;
+ mbuf_def.nb_segs = 1;
+ mbuf_def.data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf_def.port = rxq->port_id;
+ rte_mbuf_refcnt_set(&mbuf_def, 1);
+ rte_compiler_barrier();
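+ /* Snapshot the 64-bit rearm_data word (data_off, refcnt, nb_segs,
+  * port) so the vector Rx path can re-initialize an mbuf header
+  * with a single 8-byte store.
+  */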
+ data = (uintptr_t)&mbuf_def.rearm_data;
+ rxq->mbuf_init_value = *(uint64_t *)data;
+}
+
+int32_t __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev)
+{
+ struct sxe2_rx_queue *rxq = NULL;
+ int32_t ret = 0;
+ uint16_t i;
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL) {
+ PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i);
+ continue;
+ }
+ rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec;
+ sxe2_rx_queue_vec_init(rxq);
+ }
+ return ret;
+}
+
+int32_t __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags)
+{
+ struct sxe2_tx_queue *txq;
+ int32_t ret = 0;
+ uint32_t i;
+ *vec_flags = SXE2_TX_MODE_VEC_SIMPLE;
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i];
+ if (txq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+ if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC ||
+ txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0)
+ *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD;
+ }
+l_end:
+ return ret;
+}
+
+static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ uint16_t i;
+
+ if (unlikely(txq == NULL || txq->buffer_ring == NULL)) {
+ PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params.");
+ return;
+ }
+ i = txq->next_dd - (txq->rs_thresh - 1);
+ buffer = txq->buffer_ring;
+ if (txq->next_use < i) {
+ for ( ; i < txq->ring_depth; ++i) {
+ if (buffer[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer[i].mbuf);
+ buffer[i].mbuf = NULL;
+ }
+ }
+ i = 0;
+ }
+ for (; i < txq->next_use; ++i) {
+ if (buffer[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer[i].mbuf);
+ buffer[i].mbuf = NULL;
+ }
+ }
+}
+
+int32_t __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev)
+{
+ struct sxe2_tx_queue *txq = NULL;
+ int32_t ret = 0;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = dev->data->tx_queues[i];
+ if (txq == NULL) {
+ PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i);
+ continue;
+ }
+ txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec;
+ }
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h
new file mode 100644
index 0000000000..aeb56bff1e
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef _SXE2_TXRX_VEC_H_
+#define _SXE2_TXRX_VEC_H_
+#include <ethdev_driver.h>
+#include "sxe2_queue.h"
+
+#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0)
+#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1)
+#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2)
+#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3)
+#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4)
+#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5)
+#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10)
+#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \
+ SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \
+ SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \
+ SXE2_RX_MODE_VEC_NEON)
+#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0)
+#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1)
+#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2)
+#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3)
+#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4)
+#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5)
+#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10)
+#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \
+ SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \
+ SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \
+ SXE2_TX_MODE_VEC_NEON)
+#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_SECURITY | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
+#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SECURITY | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
+#ifdef RTE_ARCH_X86
+uint16_t sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+#endif
+int32_t __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags);
+int32_t __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev);
+int32_t __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags);
+bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload);
+int32_t __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev);
+#endif
diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h
new file mode 100644
index 0000000000..d608794e00
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h
@@ -0,0 +1,235 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TXRX_VEC_COMMON_H__
+#define __SXE2_TXRX_VEC_COMMON_H__
+#include <rte_atomic.h>
+#ifdef PCLINT
+#include "avx_stub.h"
+#endif
+#include "sxe2_rx.h"
+#include "sxe2_queue.h"
+#include "sxe2_tx.h"
+#include "sxe2_vsi.h"
+#include "sxe2_ethdev.h"
+#define SXE2_RX_NUM_PER_LOOP_SSE 4
+#define SXE2_RX_NUM_PER_LOOP_AVX 8
+#define SXE2_RX_NUM_PER_LOOP_NEON 4
+#define SXE2_RX_REARM_THRESH_VEC 64
+#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32
+#define SXE2_TX_RS_THRESH_MIN_VEC 32
+#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64
+
+static __rte_always_inline void
+sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ uint16_t i;
+ for (i = 0; i < nb_pkts; ++i)
+ buffer[i].mbuf = tx_pkts[i];
+}
+
+static __rte_always_inline int32_t
+sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC];
+ int32_t ret;
+ uint32_t i;
+ uint16_t rs_thresh;
+ uint16_t free_num;
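+ /* Free exactly one rs_thresh burst, and only once the descriptor
+  * at next_dd reports DD; consecutive mbufs from the same mempool
+  * are returned with a single bulk put.
+  */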
+ if ((txq->desc_ring[txq->next_dd].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) !=
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) {
+ ret = 0;
+ goto l_end;
+ }
+ rs_thresh = txq->rs_thresh;
+ buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)];
+ mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf);
+ if (likely(mbuf)) {
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf);
+ if (likely(mbuf)) {
+ if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) {
+ mbuf_free_arr[free_num] = mbuf;
+ free_num++;
+ } else {
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ } else {
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf);
+ if (mbuf != NULL)
+ rte_mempool_put(mbuf->pool, mbuf);
+ }
+ }
+ txq->desc_free_num += rs_thresh;
+ txq->next_dd += rs_thresh;
+ if (txq->next_dd >= txq->ring_depth)
+ txq->next_dd = rs_thresh - 1;
+ ret = rs_thresh;
+l_end:
+ return ret;
+}
+
+static inline void
+sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, uint64_t *desc_qw1)
+{
+ uint64_t offloads = mbuf->ol_flags;
+ uint32_t desc_cmd = 0;
+ uint32_t desc_offset = 0;
+ if (offloads & RTE_MBUF_F_TX_IP_CKSUM) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV4) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV6) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ }
+ switch (offloads & RTE_MBUF_F_TX_L4_MASK) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ default:
+ break;
+ }
+ *desc_qw1 |= ((uint64_t)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT;
+ if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1;
+ *desc_qw1 |= ((uint64_t)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT;
+ }
+ *desc_qw1 |= ((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT;
+}
+#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \
+ (((_flags) & 0x30) >> 4)
+
+static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf *mbuf, uint8_t umbcast_flag)
+{
+ if (rxq->vsi->adapter->devargs.sw_stats_en) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes,
+ mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed);
+ switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) {
+ case SXE2_RX_DESC_STATUS_UNICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_MUTICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_BOARDCAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+static inline uint16_t
+sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **mbuf_bufs, uint16_t mbuf_num,
+ uint8_t *split_rxe_flags, uint8_t *umbcast_flags)
+{
+ struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+ struct rte_mbuf *tmp_seg;
+ uint16_t done_num, buf_idx;
+ done_num = 0;
+ for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) {
+ if (last_seg) {
+ last_seg->next = mbuf_bufs[buf_idx];
+ mbuf_bufs[buf_idx]->data_len += rxq->crc_len;
+ first_seg->nb_segs++;
+ first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len;
+ last_seg = last_seg->next;
+ if (split_rxe_flags[buf_idx] == 0) {
+ first_seg->hash = last_seg->hash;
+ first_seg->vlan_tci = last_seg->vlan_tci;
+ first_seg->ol_flags = last_seg->ol_flags;
+ first_seg->pkt_len -= rxq->crc_len;
+ if (last_seg->data_len > rxq->crc_len) {
+ last_seg->data_len -= rxq->crc_len;
+ } else {
+ tmp_seg = first_seg;
+ first_seg->nb_segs--;
+ while (tmp_seg->next != last_seg)
+ tmp_seg = tmp_seg->next;
+ tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len);
+ tmp_seg->next = NULL;
+ rte_pktmbuf_free_seg(last_seg);
+ last_seg = NULL;
+ }
+ done_pkts[done_num++] = first_seg;
+ sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]);
+ first_seg = NULL;
+ last_seg = NULL;
+ } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) {
+ continue;
+ } else {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ last_seg = NULL;
+ continue;
+ }
+ } else {
+ if (split_rxe_flags[buf_idx] == 0) {
+ done_pkts[done_num++] = mbuf_bufs[buf_idx];
+ sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx],
+ umbcast_flags[buf_idx]);
+ continue;
+ } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) {
+ first_seg = mbuf_bufs[buf_idx];
+ last_seg = first_seg;
+ mbuf_bufs[buf_idx]->data_len += rxq->crc_len;
+ mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len;
+ } else {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]);
+ continue;
+ }
+ }
+ }
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+ rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *)));
+ return done_num;
+}
+#endif
diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c
new file mode 100644
index 0000000000..f6e3f45937
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c
@@ -0,0 +1,549 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <ethdev_driver.h>
+#include <rte_bitops.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_vect.h>
+#include "rte_common.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+#include "sxe2_queue.h"
+#include "sxe2_txrx_vec.h"
+#include "sxe2_txrx_vec_common.h"
+#include "sxe2_vsi.h"
+
+static __rte_always_inline void
+sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf *pkt,
+ uint64_t desc_cmd, bool with_offloads)
+{
+ __m128i data_desc;
+ uint64_t desc_qw1;
+ uint32_t desc_offset;
+ desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA |
+ ((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT |
+ ((uint64_t)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len);
+ desc_qw1 |= ((uint64_t)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT;
+ if (with_offloads)
+ sxe2_tx_desc_fill_offloads(pkt, &desc_qw1);
+ data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt));
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc);
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts, bool with_offloads)
+{
+ volatile union sxe2_tx_data_desc *desc;
+ struct sxe2_tx_buffer *buffer;
+ uint16_t next_use;
+ uint16_t res_num;
+ uint16_t tx_num;
+ uint16_t i;
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_bufs_free_vec(txq);
+ nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts);
+ if (unlikely(nb_pkts == 0)) {
+ PMD_LOG_DEBUG(TX, "Tx pkts sse batch: may not enough free desc, "
+ "free_desc=%u, need_tx_pkts=%u",
+ txq->desc_free_num, nb_pkts);
+ goto l_end;
+ }
+ tx_num = nb_pkts;
+ next_use = txq->next_use;
+ desc = &txq->desc_ring[next_use];
+ buffer = &txq->buffer_ring[next_use];
+ txq->desc_free_num -= nb_pkts;
+ res_num = txq->ring_depth - txq->next_use;
+ if (tx_num >= res_num) {
+ sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num);
+ for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) {
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts,
+ SXE2_TX_DATA_DESC_CMD_EOP,
+ with_offloads);
+ }
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++,
+ (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS),
+ with_offloads);
+ tx_num -= res_num;
+ next_use = 0;
+ txq->next_rs = txq->rs_thresh - 1;
+ desc = &txq->desc_ring[next_use];
+ buffer = &txq->buffer_ring[next_use];
+ }
+ sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num);
+ for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) {
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts,
+ SXE2_TX_DATA_DESC_CMD_EOP,
+ with_offloads);
+ }
+ next_use += tx_num;
+ if (next_use > txq->next_rs) {
+ txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs += txq->rs_thresh;
+ }
+ txq->next_use = next_use;
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, next_use, nb_pkts);
+l_end:
+ return nb_pkts;
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts, bool with_offloads)
+{
+ uint16_t tx_done_num = 0;
+ uint16_t tx_once_num;
+ uint16_t tx_need_num;
+ while (nb_pkts) {
+ tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh);
+ tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq,
+ tx_pkts + tx_done_num,
+ tx_need_num, with_offloads);
+ nb_pkts -= tx_once_num;
+ tx_done_num += tx_once_num;
+ if (tx_once_num < tx_need_num)
+ break;
+ }
+ return tx_done_num;
+}
+
+uint16_t sxe2_tx_pkts_vec_sse_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts, false);
+}
+uint16_t sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts, true);
+}
+
+static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq)
+{
+ volatile union sxe2_rx_desc *desc;
+ struct rte_mbuf **buffer;
+ struct rte_mbuf *mbuf0, *mbuf1;
+ __m128i dma_addr0, dma_addr1;
+ __m128i virt_addr0, virt_addr1;
+ __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+ RTE_PKTMBUF_HEADROOM);
+ int32_t ret;
+ uint16_t i;
+ uint16_t new_tail;
+
+ buffer = &rxq->buffer_ring[rxq->realloc_start];
+ desc = &rxq->desc_ring[rxq->realloc_start];
+ ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer,
+ SXE2_RX_REARM_THRESH_VEC);
+ if (ret != 0) {
+ PMD_LOG_INFO(RX, "Rx mbuf vec alloc failed port_id=%u "
+ "queue_id=%u", rxq->port_id, rxq->queue_id);
+ if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) {
+ dma_addr0 = _mm_setzero_si128();
+ for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) {
+ buffer[i] = &rxq->fake_mbuf;
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read),
+ dma_addr0);
+ }
+ }
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed +=
+ SXE2_RX_REARM_THRESH_VEC;
+ goto l_end;
+ }
+ for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) {
+ mbuf0 = buffer[0];
+ mbuf1 = buffer[1];
+#if RTE_IOVA_IN_MBUF
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+ offsetof(struct rte_mbuf, buf_addr) + 8);
+#endif
+ virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr);
+ virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr);
+#if RTE_IOVA_IN_MBUF
+ dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0);
+ dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1);
+#else
+ dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0);
+ dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1);
+#endif
+ dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+ dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1);
+ }
+ rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC;
+ if (rxq->realloc_start >= rxq->ring_depth)
+ rxq->realloc_start = 0;
+ rxq->realloc_num -= SXE2_RX_REARM_THRESH_VEC;
+ new_tail = (rxq->realloc_start == 0) ?
+ (rxq->ring_depth - 1) : (rxq->realloc_start - 1);
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail);
+l_end:
+ return;
+}
+
+static __rte_always_inline __m128i
+sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4])
+{
+ __m128i descs_tmp1, descs_tmp2;
+ __m128i descs_fnav_vld;
+ __m128i v_zeros, v_ffff, v_u32_one;
+ __m128i m_flags;
+ const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID);
+ descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]);
+ descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]);
+ descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2);
+ descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26);
+ descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31);
+ v_zeros = _mm_setzero_si128();
+ v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros);
+ v_u32_one = _mm_srli_epi32(v_ffff, 31);
+ m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one);
+ m_flags = _mm_and_si128(m_flags, fdir_flags);
+ return m_flags;
+}
+
+static __rte_always_inline void
+sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq,
+ volatile union sxe2_rx_desc *desc __rte_unused,
+ __m128i descs_arr[4],
+ struct rte_mbuf **rx_pkts)
+{
+ const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value);
+ __m128i rearm_arr[4];
+ __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags;
+ const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04,
+ 0x00001C04, 0x00001C04);
+ const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000,
+ 0x20000000, 0x20000000);
+ const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, RTE_MBUF_F_RX_VLAN |
+ RTE_MBUF_F_RX_VLAN_STRIPPED,
+ 0, 0, 0, 0);
+ const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+ 0, 0, 0, 0);
+ const __m128i cksum_flags =
+ _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1));
+ const __m128i cksum_mask =
+ _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
+ const __m128i vlan_mask =
+ _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+ RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
			RTE_MBUF_F_RX_VLAN |
			RTE_MBUF_F_RX_VLAN_STRIPPED,
+ RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
+ flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]);
+ tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]);
+ tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags);
+ tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags);
+ tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask);
+ tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask);
+ tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo);
+ flags = _mm_and_si128(tmp_flags, vlan_mask);
+ tmp_desc_lo = _mm_srli_epi32(tmp_desc_lo, 10);
+ tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo);
+ tmp_flags = _mm_slli_epi32(tmp_flags, 1);
+ tmp_flags = _mm_and_si128(tmp_flags, cksum_mask);
+ flags = _mm_or_si128(flags, tmp_flags);
+ tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27);
+ tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi);
+ flags = _mm_or_si128(flags, tmp_flags);
+#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+ if (rxq->fnav_enable) {
+ __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr);
+ flags = _mm_or_si128(flags, tmp_fnav_flags);
+ rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id;
+ rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id;
+ rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id;
+ rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id;
+ }
+#endif
+ rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30);
+ rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30);
+ rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30);
+ rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+ offsetof(struct rte_mbuf, rearm_data) + 8);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+ RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]);
+}
+
+static inline uint16_t
+sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_rxe_flags,
+ uint8_t *umbcast_flags)
+{
+ volatile union sxe2_rx_desc *desc;
+ struct rte_mbuf **buffer;
+ __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE];
+ __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE];
+ __m128i staterr, sterr_tmp1, sterr_tmp2;
+ __m128i pmbuf0;
+ __m128i ptype_all;
+#ifdef RTE_ARCH_X86_64
+ __m128i pmbuf1;
+#endif
+ uint32_t i;
+ uint32_t bit_num;
+ uint16_t done_num = 0;
+ const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ const __m128i crc_adjust =
+ _mm_set_epi16(0, 0, 0,
+ -rxq->crc_len,
+ 0, -rxq->crc_len,
+ 0, 0);
+ const __m128i rvp_shuf_mask =
+ _mm_set_epi8(7, 6, 5, 4,
+ 3, 2,
+ 13, 12,
			0xFF, 0xFF, 13, 12,
+ 0xFF, 0xFF, 0xFF, 0xFF);
+ const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL,
+ 0x0000000100000001LL);
+ const __m128i eop_mask = _mm_slli_epi32(dd_mask,
+ SXE2_RX_DESC_STATUS_EOP_SHIFT);
+ const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL,
+ 0x0000208000002080LL);
+ const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0x04, 0x0C,
+ 0x00, 0x08);
+ const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+ desc = &rxq->desc_ring[rxq->processing_idx];
+ rte_prefetch0(desc);
+ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE);
+ if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC)
+ sxe2_rx_queue_rearm_sse(rxq);
+ if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_DD_MASK) == 0)
+ goto l_end;
+ buffer = &rxq->buffer_ring[rxq->processing_idx];
+ for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE,
+ desc += SXE2_RX_NUM_PER_LOOP_SSE) {
+ pmbuf0 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i]));
+ descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3));
+ rte_compiler_barrier();
+ _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0);
+#ifdef RTE_ARCH_X86_64
+ pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]);
+#endif
+ descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2));
+ rte_compiler_barrier();
+ descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1));
+ rte_compiler_barrier();
+ descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc));
+#ifdef RTE_ARCH_X86_64
+ _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1);
+#endif
+ if (split_rxe_flags) {
+ rte_mbuf_prefetch_part2(rx_pkts[i]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 1]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 2]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 3]);
+ }
+ rte_compiler_barrier();
+ mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask);
+ mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask);
+ mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask);
+ mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask);
+ sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]);
+ sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]);
+ sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts);
+ mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust);
+ mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust);
+ mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust);
+ mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust);
+ staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+ ptype_all = _mm_and_si128(staterr, ptype_mask);
+ _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1,
+ mbuf_arr[3]);
+ _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1,
+ mbuf_arr[2]);
+ if (umbcast_flags != NULL) {
+ const __m128i umbcast_mask =
+ _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK);
+ const __m128i umbcast_shuf_mask =
+ _mm_set_epi8(0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0x07, 0x0F,
+ 0x03, 0x0B);
+ __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask);
+ umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask);
+ *(int32_t *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits);
+ umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE;
+ }
+ if (split_rxe_flags != NULL) {
+ __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask);
+ __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask);
+ rxe_bits = _mm_srli_epi32(rxe_bits, 7);
+ eop_bits = _mm_or_si128(eop_bits, rxe_bits);
+ eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+ *(int32_t *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits);
+ split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE;
+ }
+ staterr = _mm_and_si128(staterr, dd_mask);
+ staterr = _mm_packs_epi32(staterr, _mm_setzero_si128());
+ _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1,
+ mbuf_arr[1]);
+ _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1,
+ mbuf_arr[0]);
+ rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)];
+ rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)];
+ rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)];
+ rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)];
+ bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr));
+ done_num += bit_num;
+ if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE))
+ break;
+ }
+ rxq->processing_idx += done_num;
+ rxq->processing_idx &= (rxq->ring_depth - 1);
+ rxq->realloc_num += done_num;
+ PMD_LOG_DEBUG(RX, "port_id=%u queue_id=%u last_id=%u recv_pkts=%d",
+ rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num);
+l_end:
+ return done_num;
+}
+
+static __rte_always_inline uint16_t
+sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ const uint64_t *split_rxe_flags64;
+ uint8_t split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ uint8_t umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ uint16_t rx_done_num;
+ uint16_t rx_pkt_done_num;
+ rx_pkt_done_num = 0;
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en) {
+ rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts,
+ nb_pkts, split_rxe_flags, umbcast_flags);
+ } else {
+ rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts,
+ nb_pkts, split_rxe_flags, NULL);
+ }
+ if (rx_done_num == 0)
+ goto l_end;
+ if (!rxq->vsi->adapter->devargs.sw_stats_en) {
+ split_rxe_flags64 = (uint64_t *)split_rxe_flags;
+ if (rxq->pkt_first_seg == NULL &&
+ split_rxe_flags64[0] == 0 &&
+ split_rxe_flags64[1] == 0 &&
+ split_rxe_flags64[2] == 0 &&
+ split_rxe_flags64[3] == 0) {
+ rx_pkt_done_num = rx_done_num;
+ goto l_end;
+ }
+ if (rxq->pkt_first_seg == NULL) {
+ while (rx_pkt_done_num < rx_done_num &&
+ split_rxe_flags[rx_pkt_done_num] == 0)
+ rx_pkt_done_num++;
+ if (rx_pkt_done_num == rx_done_num)
+ goto l_end;
+ rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num];
+ }
+ }
+ rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num],
+ rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num],
+ &umbcast_flags[rx_pkt_done_num]);
+l_end:
+ return rx_pkt_done_num;
+}
+
+uint16_t sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ uint16_t done_num = 0;
+ uint16_t once_num;
+
+ while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) {
+ once_num =
+ sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue,
+ rx_pkts + done_num,
+ SXE2_RX_PKTS_BURST_BATCH_NUM_VEC);
+ done_num += once_num;
+ nb_pkts -= once_num;
+ if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC)
+ goto l_end;
+ }
+ done_num +=
+ sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue,
+ rx_pkts + done_num, nb_pkts);
+l_end:
+ return done_num;
+}
--
2.47.3
* [PATCH v14 11/11] net/sxe2: implement Tx done cleanup
2026-05-16 2:55 ` [PATCH v14 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (9 preceding siblings ...)
2026-05-16 2:55 ` [PATCH v14 10/11] net/sxe2: add vectorized " liujie5
@ 2026-05-16 2:55 ` liujie5
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
10 siblings, 1 reply; 30+ messages in thread
From: liujie5 @ 2026-05-16 2:55 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch implements the 'tx_done_cleanup' ethdev op in the sxe2
PMD. This interface allows applications to explicitly request the
driver to release mbufs that have been transmitted and are no longer
needed by the hardware.
The implementation iterates through the Tx ring, checking the status
of the descriptors starting from the last cleaned tail. It releases
the corresponding mbufs back to the mempool until either the requested
number of packets is freed or no more completed descriptors are
found.
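
From the application side, this op is reached through the generic
ethdev API rte_eth_tx_done_cleanup(). A minimal sketch of a caller
(the port/queue IDs and free count are placeholders, not taken from
this patch):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Ask the PMD to reclaim up to 64 already-transmitted mbufs on
	 * port 0, Tx queue 0. The ethdev layer dispatches to the PMD's
	 * tx_done_cleanup op and returns the number of packets freed,
	 * or a negative errno (e.g. -ENOTSUP for the sxe2 vector path).
	 */
	static void reclaim_tx_mbufs(void)
	{
		int nb_freed = rte_eth_tx_done_cleanup(0, 0, 64);

		if (nb_freed < 0)
			printf("tx_done_cleanup failed: %d\n", nb_freed);
		else
			printf("freed %d packets\n", nb_freed);
	}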
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/net/sxe2/sxe2_ethdev.c | 1 +
drivers/net/sxe2/sxe2_txrx.h | 1 +
drivers/net/sxe2/sxe2_txrx_poll.c | 101 +++++++++++++++++++++++++++++-
3 files changed, 102 insertions(+), 1 deletion(-)
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 6c41cc83bb..56a1cbee7b 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -290,6 +290,7 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.txq_info_get = sxe2_tx_queue_info_get,
.rx_burst_mode_get = sxe2_rx_burst_mode_get,
.tx_burst_mode_get = sxe2_tx_burst_mode_get,
+ .tx_done_cleanup = sxe2_tx_done_cleanup,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
index 977d8b3b1c..cb4c7c3b9e 100644
--- a/drivers/net/sxe2/sxe2_txrx.h
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -12,6 +12,7 @@ int32_t __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
uint32_t *batch_flags);
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+int32_t sxe2_tx_done_cleanup(void *txq, uint32_t free_cnt);
void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
index 78c29f8584..81d56e0ef4 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.c
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -8,12 +8,13 @@
#include <rte_malloc.h>
#include <rte_memzone.h>
#include <ethdev_driver.h>
-#include <unistd.h>
#include "sxe2_osal.h"
#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_vec_common.h"
#include "sxe2_txrx_poll.h"
#include "sxe2_txrx.h"
+#include "sxe2_txrx_vec.h"
#include "sxe2_queue.h"
#include "sxe2_ethdev.h"
#include "sxe2_common_log.h"
@@ -118,6 +119,104 @@ static inline int32_t sxe2_tx_cleanup(struct sxe2_tx_queue *txq)
return ret;
}
+static int32_t sxe2_tx_done_cleanup_simple(struct sxe2_tx_queue *txq, uint32_t free_cnt)
+{
+ uint32_t free_cnt_align;
+ uint32_t free_cnt_once;
+ uint32_t i;
+
+ if (free_cnt == 0 || free_cnt > txq->ring_depth)
+ free_cnt = txq->ring_depth;
+
+ free_cnt_align = free_cnt - (free_cnt % txq->rs_thresh);
+ for (i = 0; i < free_cnt_align; i += free_cnt_once) {
+ if ((txq->ring_depth - txq->desc_free_num) < txq->rs_thresh)
+ break;
+
+ free_cnt_once = sxe2_tx_bufs_free(txq);
+ if (free_cnt_once == 0)
+ break;
+ }
+
+ return i;
+}
+
+static int32_t sxe2_tx_done_cleanup_normal(struct sxe2_tx_queue *txq, uint32_t free_cnt)
+{
+ struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring;
+ int32_t ret;
+ uint16_t clean_last_idx, clean_idx;
+ uint16_t clean_last, clean_once;
+ uint16_t pkt_cnt, i;
+
+ if (txq->desc_free_num == 0 && sxe2_tx_cleanup(txq) != 0) {
+ ret = 0;
+ goto l_end;
+ }
+
+ if (free_cnt == 0)
+ free_cnt = txq->ring_depth;
+
+ clean_last_idx = txq->next_use;
+ clean_idx = buffer_ring[clean_last_idx].next_id;
+ clean_once = txq->desc_free_num;
+ clean_last = txq->desc_free_num;
+
+ for (pkt_cnt = 0; pkt_cnt < free_cnt;) {
+ for (i = 0; ((i < clean_once) &&
+ (pkt_cnt < free_cnt) &&
+ clean_idx != clean_last_idx); ++i) {
+ if (buffer_ring[clean_idx].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer_ring[clean_idx].mbuf);
+ buffer_ring[clean_idx].mbuf = NULL;
+ if (buffer_ring[clean_idx].last_id == clean_idx)
+ pkt_cnt++;
+ }
+ clean_idx = buffer_ring[clean_idx].next_id;
+ }
+
+ if ((txq->rs_thresh > (txq->ring_depth - txq->desc_free_num)) ||
+ clean_idx == clean_last_idx)
+ break;
+
+ if (pkt_cnt < free_cnt) {
+ if (sxe2_tx_cleanup(txq) != 0)
+ break;
+
+ clean_once = txq->desc_free_num - clean_last;
+ clean_last = txq->desc_free_num;
+ }
+ }
+
+ ret = pkt_cnt;
+l_end:
+ return ret;
+}
+
+int32_t sxe2_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+	struct sxe2_adapter *adapter;
+	int32_t ret;
+	if (txq == NULL) {
+		ret = 0;
+		goto l_end;
+	}
+	adapter = txq->vsi->adapter;
+ if (adapter->q_ctxt.tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)
+ ret = -ENOTSUP;
+ else if (adapter->q_ctxt.tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH)
+ ret = sxe2_tx_done_cleanup_simple(txq, free_cnt);
+ else
+ ret = sxe2_tx_done_cleanup_normal(txq, free_cnt);
+
+ PMD_LOG_DEBUG(TX, "TX cleanup done desc queue_id=%u free_cnt=%d.",
+ txq->queue_id, ret);
+
+l_end:
+ return ret;
+}
+
static __rte_always_inline uint16_t
sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt)
{
--
2.47.3
* [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback
2026-05-16 2:55 ` [PATCH v14 11/11] net/sxe2: implement Tx done cleanup liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 01/11] mailmap: add Jie Liu liujie5
` (10 more replies)
0 siblings, 11 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch set addresses the feedback received on the v10 submission
for the sxe2 PMD. The primary focus is on fixing vector path selection,
ensuring memory safety during mbuf initialization, and cleaning up
redundant logic in the configuration functions.
v15 Changes:
- Fixed vector Rx burst function being overwritten by scalar selection.
- Refactored Rx/Tx mode set functions to seed flags from caps first,
eliminating tautological checks.
- Added memset for mbuf_def in vector init to avoid uninitialized reads.
- Converted pci_map_addr_info to designated initializers (see the
  sketch after this list).
- Removed dead Windows-only code in meson.build.
- Added NULL checks for mbuf free for driver-wide consistency.
- Updated burst_mode_get to accurately report AVX paths.
- Adjusted SXE2_ETH_OVERHEAD to match actual VLAN capabilities.
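
For reference, the designated-initializer style mentioned above is the
usual C99 pattern; a minimal sketch with hypothetical field names (the
real pci_map_addr_info layout is in the driver sources, not shown
here):

	#include <stddef.h>
	#include <stdint.h>

	/* Hypothetical fields, for illustration only. */
	struct pci_map_addr_info {
		uint8_t bar_idx;
		uint64_t phys_addr;
		size_t len;
	};

	/* Designated initializers name each field explicitly, so the
	 * initialization stays correct if fields are reordered, and any
	 * omitted field (here phys_addr) is zero-initialized.
	 */
	static const struct pci_map_addr_info bar0_info = {
		.bar_idx = 0,
		.len = 0x100000,
	};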
Jie Liu (11):
mailmap: add Jie Liu
doc: add sxe2 guide and release notes
common/sxe2: add sxe2 basic structures
drivers: add base driver skeleton
drivers: add base driver probe skeleton
drivers: support PCI BAR mapping
common/sxe2: add ioctl interface for DMA map and unmap
net/sxe2: support queue setup and control
drivers: add data path for Rx and Tx
net/sxe2: add vectorized Rx and Tx
net/sxe2: implement Tx done cleanup
.mailmap | 1 +
doc/guides/nics/features/sxe2.ini | 29 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/sxe2.rst | 34 +
doc/guides/rel_notes/release_26_07.rst | 4 +
drivers/common/sxe2/meson.build | 15 +
drivers/common/sxe2/sxe2_common.c | 683 +++++++++++++
drivers/common/sxe2/sxe2_common.h | 85 ++
drivers/common/sxe2/sxe2_common_log.h | 81 ++
drivers/common/sxe2/sxe2_host_regs.h | 707 +++++++++++++
drivers/common/sxe2/sxe2_internal_ver.h | 33 +
drivers/common/sxe2/sxe2_ioctl_chnl.c | 325 ++++++
drivers/common/sxe2/sxe2_ioctl_chnl.h | 130 +++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 62 ++
drivers/common/sxe2/sxe2_osal.h | 73 ++
drivers/meson.build | 1 +
drivers/net/meson.build | 1 +
drivers/net/sxe2/meson.build | 32 +
drivers/net/sxe2/sxe2_cmd_chnl.c | 323 ++++++
drivers/net/sxe2/sxe2_cmd_chnl.h | 37 +
drivers/net/sxe2/sxe2_drv_cmd.h | 388 ++++++++
drivers/net/sxe2/sxe2_ethdev.c | 968 ++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 318 ++++++
drivers/net/sxe2/sxe2_irq.h | 48 +
drivers/net/sxe2/sxe2_queue.c | 66 ++
drivers/net/sxe2/sxe2_queue.h | 195 ++++
drivers/net/sxe2/sxe2_rx.c | 559 +++++++++++
drivers/net/sxe2/sxe2_rx.h | 34 +
drivers/net/sxe2/sxe2_tx.c | 420 ++++++++
drivers/net/sxe2/sxe2_tx.h | 32 +
drivers/net/sxe2/sxe2_txrx.c | 357 +++++++
drivers/net/sxe2/sxe2_txrx.h | 23 +
drivers/net/sxe2/sxe2_txrx_common.h | 540 ++++++++++
drivers/net/sxe2/sxe2_txrx_poll.c | 1045 ++++++++++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.h | 20 +
drivers/net/sxe2/sxe2_txrx_vec.c | 200 ++++
drivers/net/sxe2/sxe2_txrx_vec.h | 73 ++
drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 +++++
drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++
drivers/net/sxe2/sxe2_vsi.c | 214 ++++
drivers/net/sxe2/sxe2_vsi.h | 204 ++++
41 files changed, 9145 insertions(+)
create mode 100644 doc/guides/nics/features/sxe2.ini
create mode 100644 doc/guides/nics/sxe2.rst
create mode 100644 drivers/common/sxe2/meson.build
create mode 100644 drivers/common/sxe2/sxe2_common.c
create mode 100644 drivers/common/sxe2/sxe2_common.h
create mode 100644 drivers/common/sxe2/sxe2_common_log.h
create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
create mode 100644 drivers/common/sxe2/sxe2_osal.h
create mode 100644 drivers/net/sxe2/meson.build
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
create mode 100644 drivers/net/sxe2/sxe2_queue.c
create mode 100644 drivers/net/sxe2/sxe2_queue.h
create mode 100644 drivers/net/sxe2/sxe2_rx.c
create mode 100644 drivers/net/sxe2/sxe2_rx.h
create mode 100644 drivers/net/sxe2/sxe2_tx.c
create mode 100644 drivers/net/sxe2/sxe2_tx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx.c
create mode 100644 drivers/net/sxe2/sxe2_txrx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.h
--
2.47.3
* [PATCH v15 01/11] mailmap: add Jie Liu
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 02/11] doc: add sxe2 guide and release notes liujie5
` (9 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
.mailmap | 1 +
1 file changed, 1 insertion(+)
diff --git a/.mailmap b/.mailmap
index 895412e568..d2c4485636 100644
--- a/.mailmap
+++ b/.mailmap
@@ -739,6 +739,7 @@ Jiawen Wu <jiawenwu@trustnetic.com>
Jiayu Hu <hujiayu.hu@foxmail.com> <jiayu.hu@intel.com>
Jie Hai <haijie1@huawei.com>
Jie Liu <jie2.liu@hxt-semitech.com>
+Jie Liu <liujie5@linkdatatechnology.com>
Jie Pan <panjie5@jd.com>
Jie Wang <jie1x.wang@intel.com>
Jie Zhou <jizh@linux.microsoft.com> <jizh@microsoft.com>
--
2.47.3
* [PATCH v15 02/11] doc: add sxe2 guide and release notes
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
2026-05-16 7:46 ` [PATCH v15 01/11] mailmap: add Jie Liu liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 03/11] common/sxe2: add sxe2 basic structures liujie5
` (8 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Add a new guide for SXE2 PMD in the nics directory.
The guide contains driver capabilities, prerequisites,
and compilation/usage instructions.
Update the release notes to announce the addition of the
sxe2 network driver.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
doc/guides/nics/features/sxe2.ini | 29 ++++++++++++++++++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/sxe2.rst | 34 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_26_07.rst | 4 +++
4 files changed, 68 insertions(+)
create mode 100644 doc/guides/nics/features/sxe2.ini
create mode 100644 doc/guides/nics/sxe2.rst
diff --git a/doc/guides/nics/features/sxe2.ini b/doc/guides/nics/features/sxe2.ini
new file mode 100644
index 0000000000..7e350bab54
--- /dev/null
+++ b/doc/guides/nics/features/sxe2.ini
@@ -0,0 +1,29 @@
+;
+; Supported features of the 'sxe2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates only be supported when non-vector path
+; is selected.
+;
+[Features]
+Fast mbuf free = P
+Free Tx mbuf on demand = Y
+Burst mode info = Y
+Queue start/stop = Y
+Buffer split on Rx = P
+Scattered Rx = Y
+CRC offload = Y
+VLAN offload = Y
+QinQ offload = P
+L3 checksum offload = Y
+L4 checksum offload = Y
+Timestamp offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
+Rx descriptor status = Y
+Tx descriptor status = Y
+FreeBSD = Y
+Linux = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index cb818284fe..e20be478f8 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -68,6 +68,7 @@ Network Interface Controller Drivers
rnp
sfc_efx
softnic
+ sxe2
tap
thunderx
txgbe
diff --git a/doc/guides/nics/sxe2.rst b/doc/guides/nics/sxe2.rst
new file mode 100644
index 0000000000..7fcf9c085b
--- /dev/null
+++ b/doc/guides/nics/sxe2.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+SXE2 Poll Mode Driver
+======================
+
+The sxe2 PMD (**librte_net_sxe2**) provides poll mode driver support for
+10/25/50/100 Gbps Network Adapters.
+The embedded switch, Physical Functions (PF),
+and SR-IOV Virtual Functions (VF) are supported.
+
+Implementation details
+----------------------
+
+The sxe2 PMD is designed to operate alongside the sxe2 kernel network driver.
+For management and control operations, the PMD communicates with the kernel
+driver via ioctl interfaces. These commands are processed by the kernel
+driver and subsequently dispatched to the hardware firmware for execution.
+
+For security and robustness, the driver's data path is optimized to operate
+using virtual addresses (IOVA as VA mode). However, to ensure full
+compatibility in system environments where an IOMMU is absent or disabled,
+the driver also provides an explicit path to support physical addressing
+(IOVA as PA mode).
+
+The hardware is capable of handling the corresponding IOVA addresses (either
+VA or PA) directly, as provided by the DPDK memory subsystem. This ensures
+that DPDK applications can only access memory segments explicitly allocated
+to the current process, preventing unauthorized access to random physical
+memory.
+
+This capability allows the PMD to coexist with kernel network interfaces,
+which remain functional although they stop receiving unicast packets
+as long as they share the same MAC address.
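
As a consumer-side illustration of the IOVA modes described above, an
application can query which mode EAL actually selected at run time; a
minimal sketch using the standard EAL API (EAL can also be forced with
the --iova-mode=va|pa command-line option):

	#include <stdio.h>
	#include <rte_eal.h>

	/* After rte_eal_init() succeeds, report whether buffer addresses
	 * handed to the device are virtual (IOVA as VA) or physical
	 * (IOVA as PA).
	 */
	static void report_iova_mode(void)
	{
		if (rte_eal_iova_mode() == RTE_IOVA_VA)
			printf("running in IOVA-as-VA mode\n");
		else
			printf("running in IOVA-as-PA mode\n");
	}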
diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst
index f012d47a4b..fa0f0f5cca 100644
--- a/doc/guides/rel_notes/release_26_07.rst
+++ b/doc/guides/rel_notes/release_26_07.rst
@@ -64,6 +64,10 @@ New Features
* ``--auto-probing`` enables the initial bus probing, which is the current default behavior.
+* **Added Linkdata sxe2 ethernet driver.**
+
+ Added network driver for Linkdata network adapters.
+
Removed Items
-------------
--
2.47.3
* [PATCH v15 03/11] common/sxe2: add sxe2 basic structures
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
2026-05-16 7:46 ` [PATCH v15 01/11] mailmap: add Jie Liu liujie5
2026-05-16 7:46 ` [PATCH v15 02/11] doc: add sxe2 guide and release notes liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 04/11] drivers: add base driver skeleton liujie5
` (7 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch adds the base infrastructure for the sxe2 common
library. It includes the mandatory OS abstraction layer (OSAL),
common structure definitions, error codes, and the logging
system implementation.
Specifically, this commit:
- Implements the logging stream management using RTE_LOG_LINE (see
  the sketch below).
- Defines device-specific error codes and status registers.
- Adds the initial meson build configuration for the common library.
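
As context for the logging bullet: DPDK dynamic log types are
registered once in a .c file and then used through the wrapper macros
defined in sxe2_common_log.h below. A minimal sketch (the actual
registration presumably lives in the library sources, which are not
part of this patch):

	#include <rte_log.h>
	#include "sxe2_common_log.h"

	/* Bind the log types declared extern in sxe2_common_log.h to
	 * runtime-controllable log levels; the registered name is
	 * derived from the library's default logtype plus the suffix.
	 */
	RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE);
	RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE);

	/* The logtype argument is the part after SXE2_, so
	 * PMD_LOG_INFO(INIT, ...) logs via RTE_LOGTYPE_SXE2_INIT.
	 */
	static void example(void)
	{
		PMD_LOG_INFO(INIT, "sxe2 common library initialized");
	}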
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common_log.h | 81 +++
drivers/common/sxe2/sxe2_host_regs.h | 707 ++++++++++++++++++++++++
drivers/common/sxe2/sxe2_internal_ver.h | 33 ++
drivers/common/sxe2/sxe2_osal.h | 89 +++
4 files changed, 910 insertions(+)
create mode 100644 drivers/common/sxe2/sxe2_common_log.h
create mode 100644 drivers/common/sxe2/sxe2_host_regs.h
create mode 100644 drivers/common/sxe2/sxe2_internal_ver.h
create mode 100644 drivers/common/sxe2/sxe2_osal.h
diff --git a/drivers/common/sxe2/sxe2_common_log.h b/drivers/common/sxe2/sxe2_common_log.h
new file mode 100644
index 0000000000..e7a9a5f8d0
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common_log.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_LOG_H__
+#define __SXE2_COMMON_LOG_H__
+
+extern int32_t sxe2_common_log;
+extern int32_t sxe2_log_init;
+extern int32_t sxe2_log_driver;
+extern int32_t sxe2_log_rx;
+extern int32_t sxe2_log_tx;
+extern int32_t sxe2_log_hw;
+
+#define RTE_LOGTYPE_SXE2_COM sxe2_common_log
+#define RTE_LOGTYPE_SXE2_INIT sxe2_log_init
+#define RTE_LOGTYPE_SXE2_DRV sxe2_log_driver
+#define RTE_LOGTYPE_SXE2_RX sxe2_log_rx
+#define RTE_LOGTYPE_SXE2_TX sxe2_log_tx
+#define RTE_LOGTYPE_SXE2_HW sxe2_log_hw
+
+#define SXE2_PMD_LOG(level, log_type, ...) \
+ RTE_LOG_LINE_PREFIX(level, log_type, "%s(): ", \
+ __func__, __VA_ARGS__)
+
+#define SXE2_PMD_DRV_LOG(level, log_type, adapter, ...) \
+ RTE_LOG_LINE_PREFIX(level, log_type, "%s(): port:%u ", \
+ __func__ RTE_LOG_COMMA \
+ adapter->dev_port_id, __VA_ARGS__)
+
+#define PMD_LOG_DEBUG(logtype, fmt, ...) \
+ SXE2_PMD_LOG(DEBUG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_INFO(logtype, fmt, ...) \
+ SXE2_PMD_LOG(INFO, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_NOTICE(logtype, fmt, ...) \
+ SXE2_PMD_LOG(NOTICE, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_WARN(logtype, fmt, ...) \
+ SXE2_PMD_LOG(WARNING, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ERR(logtype, fmt, ...) \
+ SXE2_PMD_LOG(ERR, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_CRIT(logtype, fmt, ...) \
+ SXE2_PMD_LOG(CRIT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_ALERT(logtype, fmt, ...) \
+ SXE2_PMD_LOG(ALERT, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_LOG_EMERG(logtype, fmt, ...) \
+ SXE2_PMD_LOG(EMERG, SXE2_##logtype, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_DEBUG(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(DEBUG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_INFO(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(INFO, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_NOTICE(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(NOTICE, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_WARN(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(WARNING, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ERR(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(ERR, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_CRIT(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(CRIT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_ALERT(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(ALERT, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_DEV_LOG_EMERG(adapter, logtype, fmt, ...) \
+ SXE2_PMD_DRV_LOG(EMERG, SXE2_##logtype, adapter, fmt, ##__VA_ARGS__)
+
+#define PMD_INIT_FUNC_TRACE() PMD_LOG_DEBUG(INIT, " >>")
+
+#endif /* __SXE2_COMMON_LOG_H__ */
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
new file mode 100644
index 0000000000..984ea6214c
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -0,0 +1,707 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_HOST_REGS_H__
+#define __SXE2_HOST_REGS_H__
+
+#define SXE2_BITS_MASK(m, s) ((m ## UL) << (s))
+
+#define SXE2_RXQ_CTXT(_i, _QRX) (0x0050000 + ((_i) * 4 + (_QRX) * 0x20))
+#define SXE2_RXQ_HEAD(_QRX) (0x0060000 + ((_QRX) * 4))
+#define SXE2_RXQ_TAIL(_QRX) (0x0070000 + ((_QRX) * 4))
+#define SXE2_RXQ_CTRL(_QRX) (0x006d000 + ((_QRX) * 4))
+#define SXE2_RXQ_WB(_QRX) (0x006B000 + ((_QRX) * 4))
+
+#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
+#define SXE2_RXQ_CTRL_ENABLED 0x00000001
+#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+
+#define SXE2_PCIEPROC_BASE 0x002d6000
+
+#define SXE2_PF_INT_BASE 0x00260000
+#define SXE2_PF_INT_ALLOC (SXE2_PF_INT_BASE + 0x0000)
+#define SXE2_PF_INT_ALLOC_FIRST 0x7FF
+#define SXE2_PF_INT_ALLOC_LAST_S 12
+#define SXE2_PF_INT_ALLOC_LAST \
+ (0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
+#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
+#define SXE2_PF_INT_OICR_UR BIT(1)
+#define SXE2_PF_INT_OICR_CA BIT(2)
+#define SXE2_PF_INT_OICR_VFLR BIT(3)
+#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
+#define SXE2_PF_INT_OICR_BFDE BIT(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
+#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
+#define SXE2_PF_INT_OICR_GPIO BIT(9)
+#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
+#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
+#define SXE2_PF_INT_OICR_FW BIT(15)
+#define SXE2_PF_INT_OICR_SWINT BIT(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
+#define SXE2_PF_INT_OICR_GRST BIT(22)
+#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+
+#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
+
+#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
+#define SXE2_PF_INT_VFLR_DONE BIT(2)
+
+#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
+#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_OICR_CTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
+#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
+#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
+#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
+#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+
+#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
+#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
+#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
+
+#define SXE2_VFG_RAM_INIT_DONE \
+ (SXE2_PF_INT_BASE + 0x0128)
+#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+
+#define SXE2_LINK_REG_GET_10G_VALUE 4
+#define SXE2_LINK_REG_GET_25G_VALUE 1
+#define SXE2_LINK_REG_GET_50G_VALUE 2
+#define SXE2_LINK_REG_GET_100G_VALUE 3
+
+#define SXE2_PORT0_CNT 0
+#define SXE2_PORT1_CNT 1
+#define SXE2_PORT2_CNT 2
+#define SXE2_PORT3_CNT 3
+
+#define SXE2_LINK_STATUS_BASE (0x002ac200)
+#define SXE2_LINK_STATUS_PORT0_POS 3
+#define SXE2_LINK_STATUS_PORT1_POS 11
+#define SXE2_LINK_STATUS_PORT2_POS 19
+#define SXE2_LINK_STATUS_PORT3_POS 27
+#define SXE2_LINK_STATUS_MASK 1
+
+#define SXE2_LINK_SPEED_BASE (0x002ac200)
+#define SXE2_LINK_SPEED_PORT0_POS 0
+#define SXE2_LINK_SPEED_PORT1_POS 8
+#define SXE2_LINK_SPEED_PORT2_POS 16
+#define SXE2_LINK_SPEED_PORT3_POS 24
+#define SXE2_LINK_SPEED_MASK 7
+
+#define SXE2_PFVP_INT_ALLOC(vf_idx) (SXE2_PF_INT_BASE + 0x012C + ((vf_idx) * 4))
+#define SXE2_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PFVP_INT_ALLOC_LAST_S 12
+#define SXE2_PFVP_INT_ALLOC_LAST_M \
+ (0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
+
+#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_M (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_FIRST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_S 12
+
+#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \
+ (0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+
+#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
+#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_M (0xFF << SXE2_PCIEPROC_INT2FUNC_VF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
+#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+
+#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
+#define SXE2_VSI_PF_ID_S 0
+#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
+#define SXE2_VSI_PF_EN_M BIT(3)
+
+#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
+#define SXE2_MBX_CTL_MSIX_INDX_S 0
+#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
+#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+
+#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
+#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_TQCTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
+#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
+#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
+#define SXE2_PF_INT_RQCTL_ITR_IDX \
+ (0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+
+#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
+#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \
+ (0x3F)
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
+#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
+ (0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
+
+#define SXE2_VF_INT_ITR(itr_idx, irq_idx) \
+ (SXE2_PF_INT_BASE + 0xB530 + 0x2000 * (itr_idx) + 4 * (irq_idx))
+#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
+
+#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
+#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_ITR_IDX_S \
+ 3
+#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
+#define SXE2_VF_DYN_CTL_INTERVAL_S 5
+#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
+
+#define SXE2_VF_DYN_CTL_INTENABLE_MSK \
+ BIT(31)
+
+#define SXE2_BAR4_MSIX_BASE 0
+#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
+#define SXE2_BAR4_MSIX_ENABLE 0
+#define SXE2_BAR4_MSIX_DISABLE 1
+
+#define SXE2_TXQ_LEGACY_DBLL(_DBQM) (0x1000 + ((_DBQM) * 4))
+
+#define SXE2_TXQ_CONTEXT0(_pfIdx) (0x10040 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT1(_pfIdx) (0x10044 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT2(_pfIdx) (0x10048 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT3(_pfIdx) (0x1004C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT4(_pfIdx) (0x10050 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7(_pfIdx) (0x1005C + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CONTEXT7_HEAD_S 0
+#define SXE2_TXQ_CONTEXT7_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_HEAD_S)
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_S 16
+#define SXE2_TXQ_CONTEXT7_READ_HEAD_M SXE2_BITS_MASK(0xFFF, SXE2_TXQ_CONTEXT7_READ_HEAD_S)
+
+#define SXE2_TXQ_CTRL(_pfIdx) (0x10064 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_CTXT_CTRL(_pfIdx) (0x100C8 + ((_pfIdx) * 0x100))
+#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
+
+#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
+#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
+#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+
+#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
+#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
+#define SXE2_TXQ_CTXT2_CGD_IDX_S 4
+#define SXE2_TXQ_CTXT2_CGD_IDX_M SXE2_BITS_MASK(0x1F, 4)
+#define SXE2_TXQ_CTXT2_PF_IDX_S 9
+#define SXE2_TXQ_CTXT2_PF_IDX_M SXE2_BITS_MASK(0x7, 9)
+#define SXE2_TXQ_CTXT2_VMVF_IDX_S 12
+#define SXE2_TXQ_CTXT2_VMVF_IDX_M SXE2_BITS_MASK(0x3FF, 12)
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
+#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_WB_MODE_S 27
+#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_ITR_WB_S 28
+#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_SSO_EN_S 30
+#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+
+#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
+#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
+#define SXE2_TXQ_CTXT3_CPU_ID_S 12
+#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
+#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_QLEN_S 16
+#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
+
+#define SXE2_RX_BUF_CHAINED_MAX 10
+#define SXE2_RX_DESC_BASE_ADDR_UNIT 7
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+#define SXE2_RX_HWTAIL_VALUE_MASK (~0x7)
+
+enum {
+ SXE2_RX_CTXT0 = 0,
+ SXE2_RX_CTXT1,
+ SXE2_RX_CTXT2,
+ SXE2_RX_CTXT3,
+ SXE2_RX_CTXT4,
+ SXE2_RX_CTXT_CNT,
+};
+
+#define SXE2_RX_CTXT_BASE_L_S 0
+#define SXE2_RX_CTXT_BASE_L_W 32
+
+#define SXE2_RX_CTXT_BASE_H_S 0
+#define SXE2_RX_CTXT_BASE_H_W 25
+#define SXE2_RX_CTXT_DEPTH_L_S 25
+#define SXE2_RX_CTXT_DEPTH_L_W 7
+
+#define SXE2_RX_CTXT_DEPTH_H_S 0
+#define SXE2_RX_CTXT_DEPTH_H_W 6
+
+#define SXE2_RX_CTXT_DBUFF_S 6
+#define SXE2_RX_CTXT_DBUFF_W 7
+
+#define SXE2_RX_CTXT_HBUFF_S 13
+#define SXE2_RX_CTXT_HBUFF_W 5
+
+#define SXE2_RX_CTXT_HSPLT_TYPE_S 18
+#define SXE2_RX_CTXT_HSPLT_TYPE_W 2
+
+#define SXE2_RX_CTXT_DESC_TYPE_S 20
+#define SXE2_RX_CTXT_DESC_TYPE_W 1
+
+#define SXE2_RX_CTXT_CRC_S 21
+#define SXE2_RX_CTXT_CRC_W 1
+
+#define SXE2_RX_CTXT_L2TAG_FLAG_S 23
+#define SXE2_RX_CTXT_L2TAG_FLAG_W 1
+
+#define SXE2_RX_CTXT_HSPLT_0_S 24
+#define SXE2_RX_CTXT_HSPLT_0_W 4
+
+#define SXE2_RX_CTXT_HSPLT_1_S 28
+#define SXE2_RX_CTXT_HSPLT_1_W 2
+
+#define SXE2_RX_CTXT_INVALN_STP_S 31
+#define SXE2_RX_CTXT_INVALN_STP_W 1
+
+#define SXE2_RX_CTXT_LRO_ENABLE_S 0
+#define SXE2_RX_CTXT_LRO_ENABLE_W 1
+
+#define SXE2_RX_CTXT_CPUID_S 3
+#define SXE2_RX_CTXT_CPUID_W 8
+
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_S 11
+#define SXE2_RX_CTXT_MAX_FRAME_SIZE_W 14
+
+#define SXE2_RX_CTXT_LRO_DESC_MAX_S 25
+#define SXE2_RX_CTXT_LRO_DESC_MAX_W 4
+
+#define SXE2_RX_CTXT_RELAX_DATA_S 29
+#define SXE2_RX_CTXT_RELAX_DATA_W 1
+
+#define SXE2_RX_CTXT_RELAX_WB_S 30
+#define SXE2_RX_CTXT_RELAX_WB_W 1
+
+#define SXE2_RX_CTXT_RELAX_RD_S 31
+#define SXE2_RX_CTXT_RELAX_RD_W 1
+
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_S 1
+#define SXE2_RX_CTXT_THPRDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_S 2
+#define SXE2_RX_CTXT_THPWDESC_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_S 3
+#define SXE2_RX_CTXT_THPRDATA_ENABLE_W 1
+
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_S 4
+#define SXE2_RX_CTXT_THPHEAD_ENABLE_W 1
+
+#define SXE2_RX_CTXT_LOW_DESC_LINE_S 6
+#define SXE2_RX_CTXT_LOW_DESC_LINE_W 3
+
+#define SXE2_RX_CTXT_VF_ID_S 9
+#define SXE2_RX_CTXT_VF_ID_W 8
+
+#define SXE2_RX_CTXT_PF_ID_S 17
+#define SXE2_RX_CTXT_PF_ID_W 3
+
+#define SXE2_RX_CTXT_VF_ENABLE_S 20
+#define SXE2_RX_CTXT_VF_ENABLE_W 1
+
+#define SXE2_RX_CTXT_VSI_ID_S 21
+#define SXE2_RX_CTXT_VSI_ID_W 10
+
+#define SXE2_PF_CTRLQ_FW_BASE 0x00312000
+#define SXE2_PF_CTRLQ_FW_ATQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0000)
+#define SXE2_PF_CTRLQ_FW_ARQBAL (SXE2_PF_CTRLQ_FW_BASE + 0x0080)
+#define SXE2_PF_CTRLQ_FW_ATQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0100)
+#define SXE2_PF_CTRLQ_FW_ARQBAH (SXE2_PF_CTRLQ_FW_BASE + 0x0180)
+#define SXE2_PF_CTRLQ_FW_ATQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0200)
+#define SXE2_PF_CTRLQ_FW_ARQLEN (SXE2_PF_CTRLQ_FW_BASE + 0x0280)
+#define SXE2_PF_CTRLQ_FW_ATQH (SXE2_PF_CTRLQ_FW_BASE + 0x0300)
+#define SXE2_PF_CTRLQ_FW_ARQH (SXE2_PF_CTRLQ_FW_BASE + 0x0380)
+#define SXE2_PF_CTRLQ_FW_ATQT (SXE2_PF_CTRLQ_FW_BASE + 0x0400)
+#define SXE2_PF_CTRLQ_FW_ARQT (SXE2_PF_CTRLQ_FW_BASE + 0x0480)
+
+#define SXE2_PF_CTRLQ_MBX_BASE 0x00316000
+#define SXE2_PF_CTRLQ_MBX_ATQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE100)
+#define SXE2_PF_CTRLQ_MBX_ATQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE180)
+#define SXE2_PF_CTRLQ_MBX_ATQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE200)
+#define SXE2_PF_CTRLQ_MBX_ATQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE280)
+#define SXE2_PF_CTRLQ_MBX_ATQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE300)
+#define SXE2_PF_CTRLQ_MBX_ARQBAL (SXE2_PF_CTRLQ_MBX_BASE + 0xE380)
+#define SXE2_PF_CTRLQ_MBX_ARQBAH (SXE2_PF_CTRLQ_MBX_BASE + 0xE400)
+#define SXE2_PF_CTRLQ_MBX_ARQLEN (SXE2_PF_CTRLQ_MBX_BASE + 0xE480)
+#define SXE2_PF_CTRLQ_MBX_ARQH (SXE2_PF_CTRLQ_MBX_BASE + 0xE500)
+#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
+
+#define SXE2_CMD_REG_LEN_M 0x3FF
+#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
+#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
+#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+
+#define SXE2_CMD_REG_HEAD_M 0x3FF
+
+#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+
+#define SXE2_TOP_CFG_BASE 0x00292000
+#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
+#define SXE2_HW_FPGA_VER_M SXE2_BITS_MASK(0xFFF, 0)
+
+#define SXE2_FW_VER (SXE2_TOP_CFG_BASE + 0x214)
+#define SXE2_FW_VER_BUILD_M SXE2_BITS_MASK(0xFF, 0)
+#define SXE2_FW_VER_FIX_M SXE2_BITS_MASK(0xFF, 8)
+#define SXE2_FW_VER_SUB_M SXE2_BITS_MASK(0xFF, 16)
+#define SXE2_FW_VER_MAIN_M SXE2_BITS_MASK(0xFF, 24)
+#define SXE2_FW_VER_FIX_SHIFT (8)
+#define SXE2_FW_VER_SUB_SHIFT (16)
+#define SXE2_FW_VER_MAIN_SHIFT (24)
+
+#define SXE2_FW_COMP_VER_ADDR (SXE2_TOP_CFG_BASE + 0x20c)
+
+#define SXE2_STATUS SXE2_FW_VER
+
+#define SXE2_FW_STATE (SXE2_TOP_CFG_BASE + 0x210)
+
+#define SXE2_FW_HEARTBEAT (SXE2_TOP_CFG_BASE + 0x218)
+
+#define SXE2_FW_MISC (SXE2_TOP_CFG_BASE + 0x21c)
+#define SXE2_FW_MISC_MODE_M SXE2_BITS_MASK(0xF, 0)
+#define SXE2_FW_MISC_POP_M SXE2_BITS_MASK(0x80000000, 0)
+
+#define SXE2_TX_OE_BASE 0x00030000
+#define SXE2_RX_OE_BASE 0x00050000
+
+#define SXE2_PFP_L2TAGSEN(_i) (SXE2_TX_OE_BASE + 0x00300 + ((_i) * 4))
+#define SXE2_VSI_L2TAGSTXVALID(_i) \
+ (SXE2_TX_OE_BASE + 0x01000 + ((_i) * 4))
+#define SXE2_VSI_TIR0(_i) (SXE2_TX_OE_BASE + 0x01C00 + ((_i) * 4))
+#define SXE2_VSI_TIR1(_i) (SXE2_TX_OE_BASE + 0x02800 + ((_i) * 4))
+#define SXE2_VSI_TAR(_i) (SXE2_TX_OE_BASE + 0x04C00 + ((_i) * 4))
+#define SXE2_VSI_TSR(_i) (SXE2_RX_OE_BASE + 0x18000 + ((_i) * 4))
+
+#define SXE2_STATS_TX_LAN_CONFIG(_i) (SXE2_TX_OE_BASE + 0x08300 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_PKT_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08340 + ((_i) * 4))
+#define SXE2_STATS_TX_LAN_BYTE_CNT_GET(_i) (SXE2_TX_OE_BASE + 0x08380 + ((_i) * 4))
+
+#define SXE2_STATS_RX_CONFIG(_i) (SXE2_RX_OE_BASE + 0x230B0 + ((_i) * 4))
+#define SXE2_STATS_RX_LAN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230C0 + ((_i) * 8))
+#define SXE2_STATS_RX_LAN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23120 + ((_i) * 8))
+#define SXE2_STATS_RX_FD_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x230E0 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23100 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_IN_BYTE_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23140 + ((_i) * 8))
+#define SXE2_STATS_RX_MNG_OUT_PKT_CNT_GET(_i) (SXE2_RX_OE_BASE + 0x23160 + ((_i) * 8))
+
+#define SXE2_L2TAG_ID_STAG 0
+#define SXE2_L2TAG_ID_OUT_VLAN1 1
+#define SXE2_L2TAG_ID_OUT_VLAN2 2
+#define SXE2_L2TAG_ID_VLAN 3
+
+#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
+#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+
+#define SXE2_VSI_TSR_STRIP_TAG_S 0
+#define SXE2_VSI_TSR_SHOW_TAG_S 4
+
+#define SXE2_VSI_TSR_ID_STAG BIT(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
+#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+
+#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
+#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN2 3
+#define SXE2_VSI_L2TAGSTXVALID_ID_VLAN 4
+
+#define SXE2_SWITCH_OG_BASE 0x00140000
+#define SXE2_SWITCH_SWE_BASE 0x00150000
+#define SXE2_SWITCH_RG_BASE 0x00160000
+
+#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
+#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
+
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+
+#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
+
+#define SXE2_PCIE_SYS_READY 0x38c
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
+#define SXE2_PCIE_SYS_READY_R5 BIT(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+
+#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
+#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
+
+#define SXE2_PFGEN_CTRL (0x00336000)
+#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+
+#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
+#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+
+#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1)
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10))
+
+#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4))
+
+#define SXE2_ACCEPT_RULE_TAGGED_S 0
+#define SXE2_ACCEPT_RULE_UNTAGGED_S 16
+
+#define SXE2_VF_RXQ_BASE(_VF) (0x000b0800 + ((_VF) * 4))
+#define SXE2_VF_RXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_RXQ_BASE_FIRST_Q_M (0x7FF << SXE2_VF_RXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_RXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4))
+#define SXE2_VF_RXQ_MAPENA_M BIT(0)
+
+#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4))
+#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0
+#define SXE2_VF_TXQ_BASE_FIRST_Q_M (0x3FFF << SXE2_VF_TXQ_BASE_FIRST_Q_S)
+#define SXE2_VF_TXQ_BASE_Q_NUM_S 16
+#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S)
+
+#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4))
+#define SXE2_VF_TXQ_MAPENA_M BIT(0)
+
+#define PRI_PTP_BASEADDR 0x2a8000
+
+#define GLTSYN (PRI_PTP_BASEADDR + 0x0)
+#define GLTSYN_ENA_M BIT(0)
+
+#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4)
+#define GLTSYN_CMD_INIT_TIME 0x01
+#define GLTSYN_CMD_INIT_INCVAL 0x02
+#define GLTSYN_CMD_ADJ_TIME 0x04
+#define GLTSYN_CMD_ADJ_TIME_AT_TIME 0x0C
+#define GLTSYN_CMD_LATCHING_SHTIME 0x80
+
+#define GLTSYN_SYNC (PRI_PTP_BASEADDR + 0x8)
+#define GLTSYN_SYNC_PLUS_1NS 0x1
+#define GLTSYN_SYNC_MINUS_1NS 0x2
+#define GLTSYN_SYNC_EXEC 0x3
+#define GLTSYN_SYNC_GEN_PULSE 0x4
+
+#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC)
+#define GLTSYN_SEM_BUSY_M BIT(0)
+
+#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10)
+#define GLTSYN_STAT_EVENT0_M BIT(0)
+#define GLTSYN_STAT_EVENT1_M BIT(1)
+#define GLTSYN_STAT_EVENT2_M BIT(2)
+
+#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20)
+#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24)
+#define GLTSYN_TIME_S_H (PRI_PTP_BASEADDR + 0x28)
+#define GLTSYN_TIME_S_L (PRI_PTP_BASEADDR + 0x2C)
+
+#define GLTSYN_SHTIME_SUBNS (PRI_PTP_BASEADDR + 0x30)
+#define GLTSYN_SHTIME_NS (PRI_PTP_BASEADDR + 0x34)
+#define GLTSYN_SHTIME_S_H (PRI_PTP_BASEADDR + 0x38)
+#define GLTSYN_SHTIME_S_L (PRI_PTP_BASEADDR + 0x3C)
+
+#define GLTSYN_SHADJ_SUBNS (PRI_PTP_BASEADDR + 0x40)
+#define GLTSYN_SHADJ_NS (PRI_PTP_BASEADDR + 0x44)
+
+#define GLTSYN_INCVAL_NS (PRI_PTP_BASEADDR + 0x50)
+#define GLTSYN_INCVAL_SUBNS (PRI_PTP_BASEADDR + 0x54)
+
+#define GLTSYN_TGT_NS(_i) \
+ (PRI_PTP_BASEADDR + 0x60 + ((_i) * 16))
+#define GLTSYN_TGT_S_H(_i) (PRI_PTP_BASEADDR + 0x64 + ((_i) * 16))
+#define GLTSYN_TGT_S_L(_i) (PRI_PTP_BASEADDR + 0x68 + ((_i) * 16))
+
+#define GLTSYN_EVENT_NS(_i) \
+ (PRI_PTP_BASEADDR + 0xA0 + ((_i) * 16))
+
+#define GLTSYN_EVENT_S_H(_i) (PRI_PTP_BASEADDR + 0xA4 + ((_i) * 16))
+#define GLTSYN_EVENT_S_H_MASK (0xFFFF)
+
+#define GLTSYN_EVENT_S_L(_i) (PRI_PTP_BASEADDR + 0xA8 + ((_i) * 16))
+
+#define GLTSYN_AUXOUT(_i) \
+ (PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4))
+#define GLTSYN_AUXOUT_OUT_ENA BIT(0)
+#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1)
+#define GLTSYN_AUXOUT_OUTLVL BIT(3)
+#define GLTSYN_AUXOUT_INT_ENA BIT(4)
+#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3)
+
+#define GLTSYN_CLKO(_i) \
+ (PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4))
+
+#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4))
+#define GLTSYN_AUXIN_RISING_EDGE BIT(0)
+#define GLTSYN_AUXIN_FALLING_EDGE BIT(1)
+#define GLTSYN_AUXIN_ENABLE BIT(4)
+
+#define CGMAC_CSR_BASE 0x2B4000
+
+#define CGMAC_PORT_OFFSET 0x00004000
+
+#define PFP_CGM_TX_TSMEM(_port, _i) \
+	(CGMAC_CSR_BASE + 0x100 + \
+	CGMAC_PORT_OFFSET * (_port) + ((_i) * 4))
+
+#define PFP_CGM_TX_TXHI(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x108 + ((_i) * 8))
+#define PFP_CGM_TX_TXLO(_port, _i) (CGMAC_CSR_BASE + CGMAC_PORT_OFFSET * (_port) + 0x10C + ((_i) * 8))
+
+#define CGMAC_CSR_MAC0_OFFSET 0x2B4000
+#define CGMAC_CSR_MAC_OFFSET(_i) (CGMAC_CSR_MAC0_OFFSET + ((_i) * 0x4000))
+
+#define PFP_CGM_MAC_TX_TSMEM(_phy, _i) \
+ (CGMAC_CSR_MAC_OFFSET(_phy) + 0x100 + \
+ ((_i) * 4))
+
+#define PFP_CGM_MAC_TX_TXHI(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x108 + ((_i) * 8))
+#define PFP_CGM_MAC_TX_TXLO(_phy, _i) (CGMAC_CSR_MAC_OFFSET(_phy) + 0x10C + ((_i) * 8))
+
+#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11
+#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4))
+
+#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11
+#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
+#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4))
+
+#define SXE2_IPSEC_TX_BASE (0x2A0000)
+#define SXE2_IPSEC_RX_BASE (0x2A2000)
+
+#define SXE2_IPSEC_RX_IPSIDX_ADDR (SXE2_IPSEC_RX_BASE + 0x0084)
+#define SXE2_IPSEC_RX_IPSIDX_RST (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_SHIFT (18)
+#define SXE2_IPSEC_RX_IPSIDX_VBI_MASK (0x00040000)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_SHIFT (17)
+#define SXE2_IPSEC_RX_IPSIDX_SWRITE_MASK (0x00020000)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_SHIFT (4)
+#define SXE2_IPSEC_RX_IPSIDX_SA_IDX_MASK (0x0000fff0)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_SHIFT (2)
+#define SXE2_IPSEC_RX_IPSIDX_TABLE_MASK (0x0000000c)
+
+#define SXE2_IPSEC_RX_IPSIPID_ADDR (SXE2_IPSEC_RX_BASE + 0x0088)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSIPID_IP_ID_X_MASK (0x000000ff)
+
+#define SXE2_IPSEC_RX_IPSSPI0_ADDR (SXE2_IPSEC_RX_BASE + 0x008c)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_SHIFT (0)
+#define SXE2_IPSEC_RX_IPSSPI0_SPI_X_MASK (0xffffffff)
+
+#define SXE2_IPSEC_RX_IPSSPI1_ADDR (SXE2_IPSEC_RX_BASE + 0x0090)
+#define SXE2_IPSEC_RX_IPSSPI1_SPI_Y_MASK (0xffffffff)
+
+#define SXE2_PAUSE_STATS_BASE(port) (0x002b2000 + (port) * 0x4000)
+#define SXE2_TXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0894)
+#define SXE2_TXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0a18)
+#define SXE2_TXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0a20 + 8 * (pri)))
+#define SXE2_TXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0a60 + 8 * (pri)))
+#define SXE2_TXPFCXONTOXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0aa0 + 8 * (pri)))
+#define SXE2_RXPAUSEXONFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0988)
+#define SXE2_RXPAUSEXOFFFRAMES_LO(port) (SXE2_PAUSE_STATS_BASE(port) + 0x0b28)
+#define SXE2_RXPFCXONFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0b30 + 8 * (pri)))
+#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
+ (0x0b70 + 8 * (pri)))
+
+#endif
diff --git a/drivers/common/sxe2/sxe2_internal_ver.h b/drivers/common/sxe2/sxe2_internal_ver.h
new file mode 100644
index 0000000000..92f49e7a20
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_internal_ver.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_INTERNAL_VER_H__
+#define __SXE2_INTERNAL_VER_H__
+
+#define SXE2_VER_MAJOR_OFFSET (16)
+#define SXE2_MK_VER(major, minor) \
+	(((major) << SXE2_VER_MAJOR_OFFSET) | (minor))
+#define SXE2_MK_VER_MAJOR(ver) (((ver) >> SXE2_VER_MAJOR_OFFSET) & 0xff)
+#define SXE2_MK_VER_MINOR(ver) ((ver) & 0xff)
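+/* Example: SXE2_MK_VER(1, 2) == 0x00010002 (major 1, minor 2). */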
+
+#define SXE2_ITR_VER_MAJOR_V100 1
+#define SXE2_ITR_VER_MAJOR_V200 2
+
+#define SXE2_ITR_VER_MAJOR 1
+#define SXE2_ITR_VER_MINOR 1
+#define SXE2_ITR_VER SXE2_MK_VER(SXE2_ITR_VER_MAJOR, SXE2_ITR_VER_MINOR)
+
+#define SXE2_CTRL_VER_IS_V100(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V100)
+#define SXE2_CTRL_VER_IS_V200(ver) (SXE2_MK_VER_MAJOR(ver) == SXE2_ITR_VER_MAJOR_V200)
+
+#define SXE2LIB_ITR_VER_MAJOR 1
+#define SXE2LIB_ITR_VER_MINOR 1
+#define SXE2LIB_ITR_VER SXE2_MK_VER(SXE2LIB_ITR_VER_MAJOR, SXE2LIB_ITR_VER_MINOR)
+
+#define SXE2_DRV_CLI_VER_MAJOR 1
+#define SXE2_DRV_CLI_VER_MINOR 1
+#define SXE2_DRV_CLI_VER \
+ SXE2_MK_VER(SXE2_DRV_CLI_VER_MAJOR, SXE2_DRV_CLI_VER_MINOR)
+
+#endif /* __SXE2_INTERNAL_VER_H__ */
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
new file mode 100644
index 0000000000..3bea8fbf85
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_OSAL_H__
+#define __SXE2_OSAL_H__
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_version.h>
+#include <rte_bitops.h>
+
+#ifndef __BITS_PER_LONG
+#define __BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
+#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
+
+#define BITS_PER_BYTE 8
+
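+/* Unicast if the I/G bit (least significant bit of the first octet) is 0. */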
+#define IS_UNICAST_ETHER_ADDR(addr) \
+ ((bool)((((uint8_t *)(addr))[0] % ((uint8_t)0x2)) == 0))
+
+#define STRUCT_SIZE(ptr, field, num) \
+ (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = TAILQ_FIRST((head)); \
+ (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
+#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
+
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
+
+#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) \
+ (((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
+#endif
+
+enum sxe2_itr_idx {
+ SXE2_ITR_IDX_0 = 0,
+ SXE2_ITR_IDX_1,
+ SXE2_ITR_IDX_2,
+ SXE2_ITR_IDX_NONE,
+};
+
+#define ETH_P_8021Q 0x8100
+#define ETH_P_8021AD 0x88a8
+#define ETH_P_QINQ1 0x9100
+
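+/* Linux-kernel-style bitmap helpers backed by arrays of unsigned long. */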
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
+#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32)
+
+#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
+
+#define DECLARE_BITMAP(name, bits) \
+ unsigned long name[BITS_TO_LONGS(bits)]
+#define BITMAP_TYPE unsigned long
+
+static inline void sxe2_set_bit(uint32_t nr, unsigned long *addr)
+{
+ addr[nr / __BITS_PER_LONG] |= 1UL << (nr % __BITS_PER_LONG);
+}
+
+static inline void sxe2_clear_bit(uint32_t nr, unsigned long *addr)
+{
+ addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG));
+}
+
+static inline uint32_t sxe2_test_bit(uint32_t nr, const volatile unsigned long *addr)
+{
+ return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1)));
+}
+
+#endif /* __SXE2_OSAL_H__ */
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v15 04/11] drivers: add base driver skeleton
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (2 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 03/11] common/sxe2: add sxe2 basic structures liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 05/11] drivers: add base driver probe skeleton liujie5
` (6 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Initialize the sxe2 PMD skeleton by implementing the PCI probe and
remove functions. This includes the setup and cleanup of a character
device used for control path communication between the user space
and the hardware.
The character device provides an interface for ioctl-based management
operations, supporting device-specific configuration.
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
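Note to reviewers: a minimal sketch of how a class driver is expected to
hook into this common layer is shown below. The probe/remove bodies and
the PCI ID table name are placeholders for illustration only and are not
part of this patch.

	static int32_t sxe2_eth_probe(struct sxe2_common_device *cdev,
		struct sxe2_dev_kvargs_info *kvargs);
	static int32_t sxe2_eth_remove(struct sxe2_common_device *cdev);

	static struct sxe2_class_driver sxe2_eth_driver = {
		.drv_class = SXE2_CLASS_TYPE_ETH,
		.name = "sxe2_eth",
		.probe = sxe2_eth_probe,
		.remove = sxe2_eth_remove,
		.id_table = sxe2_eth_pci_id_table, /* placeholder */
	};

	RTE_INIT(sxe2_eth_driver_init)
	{
		sxe2_common_init();
		sxe2_class_driver_register(&sxe2_eth_driver);
	}
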
drivers/common/sxe2/meson.build | 15 +
drivers/common/sxe2/sxe2_common.c | 635 +++++++++++++++++++++
drivers/common/sxe2/sxe2_common.h | 85 +++
drivers/common/sxe2/sxe2_host_regs.h | 226 ++++----
drivers/common/sxe2/sxe2_ioctl_chnl.c | 160 ++++++
drivers/common/sxe2/sxe2_ioctl_chnl.h | 130 +++++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 44 ++
drivers/common/sxe2/sxe2_osal.h | 4 +-
drivers/meson.build | 1 +
9 files changed, 1185 insertions(+), 115 deletions(-)
create mode 100644 drivers/common/sxe2/meson.build
create mode 100644 drivers/common/sxe2/sxe2_common.c
create mode 100644 drivers/common/sxe2/sxe2_common.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.c
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl.h
create mode 100644 drivers/common/sxe2/sxe2_ioctl_chnl_func.h
diff --git a/drivers/common/sxe2/meson.build b/drivers/common/sxe2/meson.build
new file mode 100644
index 0000000000..f1cc1205a0
--- /dev/null
+++ b/drivers/common/sxe2/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+if is_windows
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'net', 'eal', 'ethdev']
+
+sources = files(
+ 'sxe2_common.c',
+ 'sxe2_ioctl_chnl.c',
+)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
new file mode 100644
index 0000000000..27c33b1186
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -0,0 +1,635 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_version.h>
+#include <rte_pci.h>
+#include <rte_dev.h>
+#include <rte_devargs.h>
+#include <rte_class.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+#include <rte_fbarray.h>
+#include <rte_eal.h>
+#include <eal_private.h>
+#include <eal_memcfg.h>
+#include <bus_driver.h>
+#include <bus_pci_driver.h>
+#include <eal_export.h>
+#include <pthread.h>
+
+#include "sxe2_common.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+static TAILQ_HEAD(sxe2_class_drivers, sxe2_class_driver) sxe2_class_drivers_list =
+ TAILQ_HEAD_INITIALIZER(sxe2_class_drivers_list);
+
+static TAILQ_HEAD(sxe2_common_devices, sxe2_common_device) sxe2_common_devices_list =
+ TAILQ_HEAD_INITIALIZER(sxe2_common_devices_list);
+
+static pthread_mutex_t sxe2_common_devices_list_lock;
+
+static struct rte_pci_id *sxe2_common_pci_id_table;
+
+static const struct {
+ const char *name;
+ uint32_t class_type;
+} sxe2_class_types[] = {
+ { .name = "eth", .class_type = SXE2_CLASS_TYPE_ETH },
+ { .name = "vdpa", .class_type = SXE2_CLASS_TYPE_VDPA },
+};
+
+static uint32_t sxe2_class_name_to_value(const char *class_name)
+{
+ uint32_t class_type = SXE2_CLASS_TYPE_INVALID;
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(sxe2_class_types); i++) {
+ if (strcmp(class_name, sxe2_class_types[i].name) == 0)
+ class_type = sxe2_class_types[i].class_type;
+ }
+
+ return class_type;
+}
+
+static struct sxe2_common_device *sxe2_rtedev_to_cdev(struct rte_device *rte_dev)
+{
+ struct sxe2_common_device *cdev = NULL;
+
+ TAILQ_FOREACH(cdev, &sxe2_common_devices_list, next) {
+ if (rte_dev == cdev->dev)
+ goto l_end;
+ }
+
+ cdev = NULL;
+l_end:
+ return cdev;
+}
+
+static struct sxe2_class_driver *sxe2_class_driver_get(uint32_t class_type)
+{
+ struct sxe2_class_driver *cdrv = NULL;
+
+ TAILQ_FOREACH(cdrv, &sxe2_class_drivers_list, next) {
+ if (cdrv->drv_class == class_type)
+ goto l_end;
+ }
+
+ cdrv = NULL;
+l_end:
+ return cdrv;
+}
+
+static int32_t sxe2_kvargs_preprocessing(struct sxe2_dev_kvargs_info *kv_info,
+ const struct rte_devargs *devargs)
+{
+ const struct rte_kvargs_pair *pair;
+ struct rte_kvargs *kvlist;
+ int32_t ret = -1;
+ uint32_t i;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ for (i = 0; i < kvlist->count; i++) {
+ pair = &kvlist->pairs[i];
+ if (pair->value == NULL || *(pair->value) == '\0') {
+ PMD_LOG_ERR(COM, "Key %s has no value.", pair->key);
+ rte_kvargs_free(kvlist);
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+ kv_info->kvlist = kvlist;
+ ret = 0;
+	PMD_LOG_DEBUG(COM, "Preprocessed %u kvargs successfully.",
+ kv_info->kvlist->count);
+l_end:
+ return ret;
+}
+
+static void sxe2_kvargs_free(struct sxe2_dev_kvargs_info *kv_info)
+{
+ if ((kv_info != NULL) && (kv_info->kvlist != NULL)) {
+ rte_kvargs_free(kv_info->kvlist);
+ kv_info->kvlist = NULL;
+ }
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_kvargs_process)
+int32_t
+sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info,
+ const char *const key_match, arg_handler_t handler, void *opaque_arg)
+{
+ const struct rte_kvargs_pair *pair;
+ struct rte_kvargs *kvlist;
+ uint32_t i;
+ int32_t ret = 0;
+
+ if ((kv_info == NULL) || (kv_info->kvlist == NULL) ||
+ (key_match == NULL)) {
+ PMD_LOG_ERR(COM, "Failed to process kvargs, NULL parameter.");
+ ret = -EINVAL;
+ goto l_end;
+ }
+ kvlist = kv_info->kvlist;
+
+ for (i = 0; i < kvlist->count; i++) {
+ pair = &kvlist->pairs[i];
+ if (strcmp(pair->key, key_match) == 0) {
+ ret = (*handler)(pair->key, pair->value, opaque_arg);
+ if (ret)
+ goto l_end;
+
+ kv_info->is_used[i] = true;
+ break;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_parse_class_type(const char *key, const char *value, void *args)
+{
+ uint32_t *class_type = (uint32_t *)args;
+ int32_t ret = 0;
+
+ *class_type = sxe2_class_name_to_value(value);
+ if (*class_type == SXE2_CLASS_TYPE_INVALID) {
+ ret = -EINVAL;
+ PMD_LOG_ERR(COM, "Unsupported %s type: %s", key, value);
+ }
+
+ return ret;
+}
+
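+/*
+ * Primary-process-only setup: open the pmd character device for this PCI
+ * device and perform the initial handshake with the kernel driver.
+ */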
+static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
+{
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+ int32_t ret = 0;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_drv_dev_open(cdev, pci_dev);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Open pmd chrdev failed, ret=%d", ret);
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_handshark(cdev);
+ if (ret != 0) {
+		PMD_LOG_ERR(COM, "Handshake failed, ret=%d", ret);
+ goto l_close_dev;
+ }
+
+ goto l_end;
+
+l_close_dev:
+ sxe2_drv_dev_close(cdev);
+l_end:
+ return ret;
+}
+
+static void sxe2_common_device_cleanup(struct sxe2_common_device *cdev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return;
+
+ if (TAILQ_EMPTY(&sxe2_common_devices_list))
+ (void)rte_mem_event_callback_unregister("SXE2_MEM_ENVENT_CB", NULL);
+
+ sxe2_drv_dev_close(cdev);
+}
+
+static struct sxe2_common_device *sxe2_common_device_alloc(
+ struct rte_device *rte_dev, uint32_t class_type)
+{
+ struct sxe2_common_device *cdev = NULL;
+
+ cdev = rte_zmalloc("sxe2_common_device", sizeof(*cdev), 0);
+ if (cdev == NULL) {
+ PMD_LOG_ERR(COM, "Fail to alloc sxe2 common device.");
+ goto l_end;
+ }
+ cdev->dev = rte_dev;
+ cdev->class_type = class_type;
+ cdev->config.kernel_reset = false;
+ pthread_mutex_init(&cdev->config.lock, NULL);
+
+ (void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+ TAILQ_INSERT_TAIL(&sxe2_common_devices_list, cdev, next);
+ pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+l_end:
+ return cdev;
+}
+
+static void sxe2_common_device_free(struct sxe2_common_device *cdev)
+{
+ (void)pthread_mutex_lock(&sxe2_common_devices_list_lock);
+ TAILQ_REMOVE(&sxe2_common_devices_list, cdev, next);
+ pthread_mutex_unlock(&sxe2_common_devices_list_lock);
+
+ rte_free(cdev);
+}
+
+static bool sxe2_dev_is_pci(const struct rte_device *dev)
+{
+ return strcmp(dev->bus->name, "pci") == 0;
+}
+
+static bool sxe2_dev_pci_id_match(const struct sxe2_class_driver *cdrv,
+ const struct rte_device *dev)
+{
+ const struct rte_pci_device *pci_dev;
+ const struct rte_pci_id *id_table;
+ bool ret = false;
+
+ if (!sxe2_dev_is_pci(dev)) {
+ PMD_LOG_ERR(COM, "Device %s is not a PCI device", dev->name);
+ goto l_end;
+ }
+
+ pci_dev = RTE_DEV_TO_PCI_CONST(dev);
+ for (id_table = cdrv->id_table; id_table->vendor_id != 0;
+ id_table++) {
+ if (id_table->vendor_id != pci_dev->id.vendor_id &&
+ id_table->vendor_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->device_id != pci_dev->id.device_id &&
+ id_table->device_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->subsystem_vendor_id !=
+ pci_dev->id.subsystem_vendor_id &&
+ id_table->subsystem_vendor_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->subsystem_device_id !=
+ pci_dev->id.subsystem_device_id &&
+ id_table->subsystem_device_id != RTE_PCI_ANY_ID) {
+ continue;
+ }
+ if (id_table->class_id != pci_dev->id.class_id &&
+ id_table->class_id != RTE_CLASS_ANY_ID) {
+ continue;
+ }
+ ret = true;
+ break;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_classes_driver_probe(struct sxe2_common_device *cdev,
+ struct sxe2_dev_kvargs_info *kv_info, uint32_t class_type)
+{
+ struct sxe2_class_driver *cdrv = NULL;
+ int32_t ret = -1;
+
+ cdrv = sxe2_class_driver_get(class_type);
+ if (cdrv == NULL) {
+ PMD_LOG_ERR(COM, "Fail to get class type[%u] driver.", class_type);
+ goto l_end;
+ }
+
+ if (!sxe2_dev_pci_id_match(cdrv, cdev->dev)) {
+		PMD_LOG_ERR(COM, "Fail to match pci id for driver: %s.", cdrv->name);
+ goto l_end;
+ }
+
+ ret = cdrv->probe(cdev, kv_info);
+ if (ret) {
+		PMD_LOG_DEBUG(COM, "Fail to probe driver: %s.", cdrv->name);
+ goto l_end;
+ }
+
+ cdev->cdrv = cdrv;
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_classes_driver_remove(struct sxe2_common_device *cdev)
+{
+ struct sxe2_class_driver *cdrv = cdev->cdrv;
+
+ return cdrv->remove(cdev);
+}
+
+static int32_t sxe2_kvargs_validate(struct sxe2_dev_kvargs_info *kv_info)
+{
+ int32_t ret = 0;
+ uint32_t i;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ if (kv_info == NULL)
+ goto l_end;
+
+ for (i = 0; i < kv_info->kvlist->count; i++) {
+ if (kv_info->is_used[i] == 0) {
+ PMD_LOG_ERR(COM, "Key \"%s\" is unsupported for the class driver.",
+ kv_info->kvlist->pairs[i].key);
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
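+/*
+ * Common PCI probe: parse devargs to select a class driver ("eth" by
+ * default), set up the common device, delegate to the class driver's
+ * probe, then verify that every devarg key was consumed.
+ */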
+static int32_t sxe2_common_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ struct rte_device *rte_dev = &pci_dev->device;
+ struct sxe2_common_device *cdev;
+ struct sxe2_dev_kvargs_info *kv_info_p = NULL;
+
+ uint32_t class_type = SXE2_CLASS_TYPE_ETH;
+ int32_t ret = -1;
+
+ PMD_LOG_INFO(COM, "Probe pci device: %s", pci_dev->name);
+
+ cdev = sxe2_rtedev_to_cdev(rte_dev);
+ if (cdev != NULL) {
+ PMD_LOG_ERR(COM, "Device %s already probed.", rte_dev->name);
+ ret = -EBUSY;
+ goto l_end;
+ }
+
+ if ((rte_dev->devargs != NULL) && (rte_dev->devargs->args != NULL)) {
+ kv_info_p = calloc(1, sizeof(struct sxe2_dev_kvargs_info));
+ if (!kv_info_p) {
+ PMD_LOG_ERR(COM, "Failed to allocate memory for kv_info");
+ goto l_end;
+ }
+
+ ret = sxe2_kvargs_preprocessing(kv_info_p, rte_dev->devargs);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Unsupported device args: %s",
+ rte_dev->devargs->args);
+ goto l_free_kvargs;
+ }
+
+ ret = sxe2_kvargs_process(kv_info_p, SXE2_DEVARGS_KEY_CLASS,
+ sxe2_parse_class_type, &class_type);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Unsupported sxe2 driver class: %s",
+ rte_dev->devargs->args);
+ goto l_free_args;
+ }
+
+ }
+
+ cdev = sxe2_common_device_alloc(rte_dev, class_type);
+ if (cdev == NULL) {
+ ret = -ENOMEM;
+ goto l_free_args;
+ }
+
+ ret = sxe2_common_device_setup(cdev);
+ if (ret != 0)
+ goto l_err_setup;
+
+ ret = sxe2_classes_driver_probe(cdev, kv_info_p, class_type);
+ if (ret != 0)
+ goto l_err_probe;
+
+ ret = sxe2_kvargs_validate(kv_info_p);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Device args validate failed: %s",
+ rte_dev->devargs->args);
+ goto l_err_valid;
+ }
+ cdev->kvargs = kv_info_p;
+
+ goto l_end;
+l_err_valid:
+ (void)sxe2_classes_driver_remove(cdev);
+l_err_probe:
+ sxe2_common_device_cleanup(cdev);
+l_err_setup:
+ sxe2_common_device_free(cdev);
+l_free_args:
+ sxe2_kvargs_free(kv_info_p);
+l_free_kvargs:
+ free(kv_info_p);
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ PMD_LOG_INFO(COM, "Remove pci device: %s", pci_dev->name);
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+		PMD_LOG_ERR(COM, "Fail to find device to remove.");
+ goto l_end;
+ }
+
+ ret = sxe2_classes_driver_remove(cdev);
+ if (ret != 0) {
+ PMD_LOG_ERR(COM, "Fail to remove device: %s", pci_dev->name);
+ goto l_end;
+ }
+
+ sxe2_common_device_cleanup(cdev);
+
+ if (cdev->kvargs != NULL) {
+ sxe2_kvargs_free(cdev->kvargs);
+ free(cdev->kvargs);
+ cdev->kvargs = NULL;
+ }
+
+ sxe2_common_device_free(cdev);
+
+l_end:
+ return ret;
+}
+
+static struct rte_pci_driver sxe2_common_pci_driver = {
+ .driver = {
+ .name = SXE2_COMMON_PCI_DRIVER_NAME,
+ },
+ .probe = sxe2_common_pci_probe,
+ .remove = sxe2_common_pci_remove,
+};
+
+static uint32_t sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
+{
+ uint32_t table_size = 0;
+
+ while (id_table->vendor_id != 0) {
+ table_size++;
+ id_table++;
+ }
+
+ return table_size;
+}
+
+static bool sxe2_common_pci_id_exists(const struct rte_pci_id *id,
+ const struct rte_pci_id *id_table, uint32_t next_idx)
+{
+	int32_t current_size = next_idx;
+ int32_t i;
+ bool exists = false;
+
+ for (i = 0; i < current_size; i++) {
+ if ((id->device_id == id_table[i].device_id) &&
+ (id->vendor_id == id_table[i].vendor_id) &&
+ (id->subsystem_vendor_id == id_table[i].subsystem_vendor_id) &&
+ (id->subsystem_device_id == id_table[i].subsystem_device_id)) {
+ exists = true;
+ break;
+ }
+ }
+
+ return exists;
+}
+
+static void sxe2_common_pci_id_insert(struct rte_pci_id *id_table,
+ uint32_t *next_idx, const struct rte_pci_id *insert_table)
+{
+ for (; insert_table->vendor_id != 0; insert_table++) {
+ if (!sxe2_common_pci_id_exists(insert_table, id_table, *next_idx)) {
+ id_table[*next_idx] = *insert_table;
+ (*next_idx)++;
+ }
+ }
+}
+
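+/*
+ * Merge id_table into the accumulated driver-wide PCI ID table, skipping
+ * duplicate entries, and re-point the common PCI driver at the new
+ * zero-terminated array.
+ */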
+static int32_t sxe2_common_pci_id_table_update(const struct rte_pci_id *id_table)
+{
+ const struct rte_pci_id *id_iter;
+ struct rte_pci_id *updated_table;
+ struct rte_pci_id *old_table;
+ uint32_t num_ids = 0;
+ uint32_t i = 0;
+ int32_t ret = 0;
+
+ old_table = sxe2_common_pci_id_table;
+ if (old_table)
+ num_ids = sxe2_common_pci_id_table_size_get(old_table);
+
+ num_ids += sxe2_common_pci_id_table_size_get(id_table);
+
+ num_ids += 1;
+
+ updated_table = calloc(num_ids, sizeof(*updated_table));
+	if (!updated_table) {
+		PMD_LOG_ERR(COM, "Failed to allocate memory for PCI ID table");
+		ret = -ENOMEM;
+		goto l_end;
+	}
+
+ if (old_table == NULL) {
+ for (id_iter = id_table; id_iter->vendor_id != 0;
+ id_iter++, i++)
+ updated_table[i] = *id_iter;
+ } else {
+ for (id_iter = old_table; id_iter->vendor_id != 0;
+ id_iter++, i++)
+ updated_table[i] = *id_iter;
+
+ sxe2_common_pci_id_insert(updated_table, &i, id_table);
+ }
+
+ updated_table[i].vendor_id = 0;
+ sxe2_common_pci_driver.id_table = updated_table;
+ sxe2_common_pci_id_table = updated_table;
+ free(old_table);
+
+l_end:
+ return ret;
+}
+
+static void sxe2_common_driver_on_register_pci(struct sxe2_class_driver *driver)
+{
+ if (driver->id_table != NULL) {
+ if (sxe2_common_pci_id_table_update(driver->id_table) != 0)
+ return;
+ }
+
+ if (driver->intr_lsc)
+ sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_LSC;
+ if (driver->intr_rmv)
+ sxe2_common_pci_driver.drv_flags |= RTE_PCI_DRV_INTR_RMV;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_class_driver_register)
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver)
+{
+ sxe2_common_driver_on_register_pci(driver);
+ TAILQ_INSERT_TAIL(&sxe2_class_drivers_list, driver, next);
+}
+
+static void sxe2_common_pci_init(void)
+{
+ const struct rte_pci_id empty_table[] = {
+ {
+ .vendor_id = 0
+ },
+ };
+ int32_t ret = -1;
+
+ if (sxe2_common_pci_id_table == NULL) {
+ ret = sxe2_common_pci_id_table_update(empty_table);
+ if (ret != 0)
+ goto l_end;
+ }
+ rte_pci_register(&sxe2_common_pci_driver);
+
+l_end:
+ return;
+}
+
+static bool sxe2_common_inited;
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
+void
+sxe2_common_init(void)
+{
+	if (sxe2_common_inited)
+ goto l_end;
+
+ pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
+ sxe2_common_pci_init();
+	sxe2_common_inited = true;
+
+l_end:
+ return;
+}
+
+RTE_FINI(sxe2_common_pci_finish)
+{
+ if (sxe2_common_pci_id_table != NULL) {
+ rte_pci_unregister(&sxe2_common_pci_driver);
+ free(sxe2_common_pci_id_table);
+ }
+}
+
+RTE_PMD_EXPORT_NAME(sxe2_common_pci);
+
+RTE_LOG_REGISTER_SUFFIX(sxe2_common_log, com, NOTICE);
diff --git a/drivers/common/sxe2/sxe2_common.h b/drivers/common/sxe2/sxe2_common.h
new file mode 100644
index 0000000000..5fe218db99
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_common.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_COMMON_H__
+#define __SXE2_COMMON_H__
+
+#include <rte_bitops.h>
+#include <rte_kvargs.h>
+#include <rte_compat.h>
+#include <rte_memory.h>
+#include <rte_ticketlock.h>
+#include <pthread.h>
+
+#define SXE2_COMMON_PCI_DRIVER_NAME "sxe2_pci"
+
+#define SXE2_CDEV_TO_CMD_FD(cdev) \
+ ((cdev)->config.cmd_fd)
+
+#define SXE2_DEVARGS_KEY_CLASS "class"
+
+struct sxe2_class_driver;
+
+enum sxe2_class_type {
+ SXE2_CLASS_TYPE_ETH = 0,
+ SXE2_CLASS_TYPE_VDPA,
+ SXE2_CLASS_TYPE_INVALID,
+};
+
+struct sxe2_common_dev_config {
+ int32_t cmd_fd;
+ bool support_iommu;
+ bool kernel_reset;
+ pthread_mutex_t lock;
+};
+
+struct sxe2_common_device {
+ struct rte_device *dev;
+ TAILQ_ENTRY(sxe2_common_device) next;
+ struct sxe2_class_driver *cdrv;
+ enum sxe2_class_type class_type;
+ struct sxe2_common_dev_config config;
+ struct sxe2_dev_kvargs_info *kvargs;
+};
+
+struct sxe2_dev_kvargs_info {
+ struct rte_kvargs *kvlist;
+ bool is_used[RTE_KVARGS_MAX];
+};
+
+typedef int32_t (sxe2_class_driver_probe_t)(struct sxe2_common_device *scdev,
+ struct sxe2_dev_kvargs_info *kvargs);
+
+typedef int32_t (sxe2_class_driver_remove_t)(struct sxe2_common_device *scdev);
+
+struct sxe2_class_driver {
+ TAILQ_ENTRY(sxe2_class_driver) next;
+ enum sxe2_class_type drv_class;
+ const char *name;
+ sxe2_class_driver_probe_t *probe;
+ sxe2_class_driver_remove_t *remove;
+ const struct rte_pci_id *id_table;
+ uint32_t intr_lsc;
+ uint32_t intr_rmv;
+};
+
+__rte_internal
+void
+sxe2_common_mem_event_cb(enum rte_mem_event type,
+ const void *addr, size_t size, void *arg __rte_unused);
+
+__rte_internal
+void
+sxe2_class_driver_register(struct sxe2_class_driver *driver);
+
+__rte_internal
+void
+sxe2_common_init(void);
+
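+/*
+ * Run handler on the value of key_match if present in kv_info and mark the
+ * key as consumed; keys left unconsumed make devargs validation fail at
+ * the end of probe.
+ */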
+__rte_internal
+int32_t
+sxe2_kvargs_process(struct sxe2_dev_kvargs_info *kv_info,
+ const char *const key_match, arg_handler_t handler, void *opaque_arg);
+
+#endif /* __SXE2_COMMON_H__ */
diff --git a/drivers/common/sxe2/sxe2_host_regs.h b/drivers/common/sxe2/sxe2_host_regs.h
index 984ea6214c..9116be0ba0 100644
--- a/drivers/common/sxe2/sxe2_host_regs.h
+++ b/drivers/common/sxe2/sxe2_host_regs.h
@@ -15,7 +15,7 @@
#define SXE2_RXQ_CTRL_STATUS_ACTIVE 0x00000004
#define SXE2_RXQ_CTRL_ENABLED 0x00000001
-#define SXE2_RXQ_CTRL_CDE_ENABLE BIT(3)
+#define SXE2_RXQ_CTRL_CDE_ENABLE RTE_BIT32(3)
#define SXE2_PCIEPROC_BASE 0x002d6000
@@ -25,78 +25,78 @@
#define SXE2_PF_INT_ALLOC_LAST_S 12
#define SXE2_PF_INT_ALLOC_LAST \
(0x7FF << SXE2_PF_INT_ALLOC_LAST_S)
-#define SXE2_PF_INT_ALLOC_VALID BIT(31)
+#define SXE2_PF_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PF_INT_OICR (SXE2_PF_INT_BASE + 0x0040)
-#define SXE2_PF_INT_OICR_PCIE_TIMEOUT BIT(0)
-#define SXE2_PF_INT_OICR_UR BIT(1)
-#define SXE2_PF_INT_OICR_CA BIT(2)
-#define SXE2_PF_INT_OICR_VFLR BIT(3)
-#define SXE2_PF_INT_OICR_VFR_DONE BIT(4)
-#define SXE2_PF_INT_OICR_LAN_TX_ERR BIT(5)
-#define SXE2_PF_INT_OICR_BFDE BIT(6)
-#define SXE2_PF_INT_OICR_LAN_RX_ERR BIT(7)
-#define SXE2_PF_INT_OICR_ECC_ERR BIT(8)
-#define SXE2_PF_INT_OICR_GPIO BIT(9)
-#define SXE2_PF_INT_OICR_TSYN_TX BIT(11)
-#define SXE2_PF_INT_OICR_TSYN_EVENT BIT(12)
-#define SXE2_PF_INT_OICR_TSYN_TGT BIT(13)
-#define SXE2_PF_INT_OICR_EXHAUST BIT(14)
-#define SXE2_PF_INT_OICR_FW BIT(15)
-#define SXE2_PF_INT_OICR_SWINT BIT(16)
-#define SXE2_PF_INT_OICR_LINKSEC_CHG BIT(17)
-#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR BIT(18)
-#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR BIT(19)
-#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE BIT(20)
-#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT BIT(21)
-#define SXE2_PF_INT_OICR_GRST BIT(22)
-#define SXE2_PF_INT_OICR_FWQ_INT BIT(29)
-#define SXE2_PF_INT_OICR_FWQ_TOOL_INT BIT(30)
-#define SXE2_PF_INT_OICR_MBXQ_INT BIT(31)
+#define SXE2_PF_INT_OICR_PCIE_TIMEOUT RTE_BIT32(0)
+#define SXE2_PF_INT_OICR_UR RTE_BIT32(1)
+#define SXE2_PF_INT_OICR_CA RTE_BIT32(2)
+#define SXE2_PF_INT_OICR_VFLR RTE_BIT32(3)
+#define SXE2_PF_INT_OICR_VFR_DONE RTE_BIT32(4)
+#define SXE2_PF_INT_OICR_LAN_TX_ERR RTE_BIT32(5)
+#define SXE2_PF_INT_OICR_BFDE RTE_BIT32(6)
+#define SXE2_PF_INT_OICR_LAN_RX_ERR RTE_BIT32(7)
+#define SXE2_PF_INT_OICR_ECC_ERR RTE_BIT32(8)
+#define SXE2_PF_INT_OICR_GPIO RTE_BIT32(9)
+#define SXE2_PF_INT_OICR_TSYN_TX RTE_BIT32(11)
+#define SXE2_PF_INT_OICR_TSYN_EVENT RTE_BIT32(12)
+#define SXE2_PF_INT_OICR_TSYN_TGT RTE_BIT32(13)
+#define SXE2_PF_INT_OICR_EXHAUST RTE_BIT32(14)
+#define SXE2_PF_INT_OICR_FW RTE_BIT32(15)
+#define SXE2_PF_INT_OICR_SWINT RTE_BIT32(16)
+#define SXE2_PF_INT_OICR_LINKSEC_CHG RTE_BIT32(17)
+#define SXE2_PF_INT_OICR_INT_CFG_ADDR_ERR RTE_BIT32(18)
+#define SXE2_PF_INT_OICR_INT_CFG_DATA_ERR RTE_BIT32(19)
+#define SXE2_PF_INT_OICR_INT_CFG_ADR_UNRANGE RTE_BIT32(20)
+#define SXE2_PF_INT_OICR_INT_RAM_CONFLICT RTE_BIT32(21)
+#define SXE2_PF_INT_OICR_GRST RTE_BIT32(22)
+#define SXE2_PF_INT_OICR_FWQ_INT RTE_BIT32(29)
+#define SXE2_PF_INT_OICR_FWQ_TOOL_INT RTE_BIT32(30)
+#define SXE2_PF_INT_OICR_MBXQ_INT RTE_BIT32(31)
#define SXE2_PF_INT_OICR_ENABLE (SXE2_PF_INT_BASE + 0x0020)
#define SXE2_PF_INT_FW_EVENT (SXE2_PF_INT_BASE + 0x0100)
-#define SXE2_PF_INT_FW_ABNORMAL BIT(0)
-#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW BIT(1)
-#define SXE2_PF_INT_CGMAC_LINK_CHG BIT(18)
-#define SXE2_PF_INT_VFLR_DONE BIT(2)
+#define SXE2_PF_INT_FW_ABNORMAL RTE_BIT32(0)
+#define SXE2_PF_INT_RDMA_AEQ_OVERFLOW RTE_BIT32(1)
+#define SXE2_PF_INT_CGMAC_LINK_CHG RTE_BIT32(18)
+#define SXE2_PF_INT_VFLR_DONE RTE_BIT32(2)
#define SXE2_PF_INT_OICR_CTL (SXE2_PF_INT_BASE + 0x0060)
#define SXE2_PF_INT_OICR_CTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_OICR_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_OICR_CTL_ITR_IDX \
(0x3 << SXE2_PF_INT_OICR_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_OICR_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_FWQ_CTL (SXE2_PF_INT_BASE + 0x00C0)
#define SXE2_PF_INT_FWQ_CTL_MSIX_IDX 0x7FFF
#define SXE2_PF_INT_FWQ_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_FWQ_CTL_ITR_IDX \
(0x3 << SXE2_PF_INT_FWQ_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_FWQ_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_MBX_CTL (SXE2_PF_INT_BASE + 0x00A0)
#define SXE2_PF_INT_MBX_CTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_MBX_CTL_ITR_IDX_S 11
#define SXE2_PF_INT_MBX_CTL_ITR_IDX (0x3 << SXE2_PF_INT_MBX_CTL_ITR_IDX_S)
-#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_MBX_CTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_GPIO_ENA (SXE2_PF_INT_BASE + 0x0100)
-#define SXE2_PF_INT_GPIO_X_ENA(x) BIT(x)
+#define SXE2_PF_INT_GPIO_X_ENA(x) RTE_BIT32(x)
#define SXE2_PFG_INT_CTL (SXE2_PF_INT_BASE + 0x0120)
#define SXE2_PFG_INT_CTL_ITR_GRAN 0x7
#define SXE2_PFG_INT_CTL_ITR_GRAN_0 (2)
-#define SXE2_PFG_INT_CTL_CREDIT_GRAN BIT(4)
+#define SXE2_PFG_INT_CTL_CREDIT_GRAN RTE_BIT32(4)
#define SXE2_PFG_INT_CTL_CREDIT_GRAN_0 (4)
#define SXE2_PFG_INT_CTL_CREDIT_GRAN_1 (8)
#define SXE2_VFG_RAM_INIT_DONE \
(SXE2_PF_INT_BASE + 0x0128)
-#define SXE2_VFG_RAM_INIT_DONE_0 BIT(0)
-#define SXE2_VFG_RAM_INIT_DONE_1 BIT(1)
-#define SXE2_VFG_RAM_INIT_DONE_2 BIT(2)
+#define SXE2_VFG_RAM_INIT_DONE_0 RTE_BIT32(0)
+#define SXE2_VFG_RAM_INIT_DONE_1 RTE_BIT32(1)
+#define SXE2_VFG_RAM_INIT_DONE_2 RTE_BIT32(2)
#define SXE2_LINK_REG_GET_10G_VALUE 4
#define SXE2_LINK_REG_GET_25G_VALUE 1
@@ -129,7 +129,7 @@
#define SXE2_PFVP_INT_ALLOC_LAST_S 12
#define SXE2_PFVP_INT_ALLOC_LAST_M \
(0x7FF << SXE2_PFVP_INT_ALLOC_LAST_S)
-#define SXE2_PFVP_INT_ALLOC_VALID BIT(31)
+#define SXE2_PFVP_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PCI_PFVP_INT_ALLOC(vf_idx) (SXE2_PCIEPROC_BASE + 0x5800 + ((vf_idx) * 4))
#define SXE2_PCI_PFVP_INT_ALLOC_FIRST_S 0
@@ -139,7 +139,7 @@
#define SXE2_PCI_PFVP_INT_ALLOC_LAST_M \
(0x7FF << SXE2_PCI_PFVP_INT_ALLOC_LAST_S)
-#define SXE2_PCI_PFVP_INT_ALLOC_VALID BIT(31)
+#define SXE2_PCI_PFVP_INT_ALLOC_VALID RTE_BIT32(31)
#define SXE2_PCIEPROC_INT2FUNC(_INT) (SXE2_PCIEPROC_BASE + 0xe000 + ((_INT) * 4))
#define SXE2_PCIEPROC_INT2FUNC_VF_NUM_S 0
@@ -147,37 +147,37 @@
#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_S 12
#define SXE2_PCIEPROC_INT2FUNC_PF_NUM_M (0x7 << SXE2_PCIEPROC_INT2FUNC_PF_NUM_S)
#define SXE2_PCIEPROC_INT2FUNC_IS_PF_S 16
-#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M BIT(16)
+#define SXE2_PCIEPROC_INT2FUNC_IS_PF_M RTE_BIT32(16)
#define SXE2_VSI_PF(vf_idx) (SXE2_PF_INT_BASE + 0x14000 + ((vf_idx) * 4))
#define SXE2_VSI_PF_ID_S 0
#define SXE2_VSI_PF_ID_M (0x7 << SXE2_VSI_PF_ID_S)
-#define SXE2_VSI_PF_EN_M BIT(3)
+#define SXE2_VSI_PF_EN_M RTE_BIT32(3)
#define SXE2_MBX_CTL(_VSI) (0x0026692C + ((_VSI) * 4))
#define SXE2_MBX_CTL_MSIX_INDX_S 0
#define SXE2_MBX_CTL_MSIX_INDX_M (0x7FF << SXE2_MBX_CTL_MSIX_INDX_S)
-#define SXE2_MBX_CTL_CAUSE_ENA_M BIT(30)
+#define SXE2_MBX_CTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_PF_INT_TQCTL(q_idx) (SXE2_PF_INT_BASE + 0x092C + 4 * (q_idx))
#define SXE2_PF_INT_TQCTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_TQCTL_ITR_IDX_S 11
#define SXE2_PF_INT_TQCTL_ITR_IDX \
(0x3 << SXE2_PF_INT_TQCTL_ITR_IDX_S)
-#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_TQCTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_RQCTL(q_idx) (SXE2_PF_INT_BASE + 0x292C + 4 * (q_idx))
#define SXE2_PF_INT_RQCTL_MSIX_IDX 0x7FF
#define SXE2_PF_INT_RQCTL_ITR_IDX_S 11
#define SXE2_PF_INT_RQCTL_ITR_IDX \
(0x3 << SXE2_PF_INT_RQCTL_ITR_IDX_S)
-#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE BIT(30)
+#define SXE2_PF_INT_RQCTL_CAUSE_ENABLE RTE_BIT32(30)
#define SXE2_PF_INT_RATE(irq_idx) (SXE2_PF_INT_BASE + 0x7530 + 4 * (irq_idx))
#define SXE2_PF_INT_RATE_CREDIT_INTERVAL (0x3F)
#define SXE2_PF_INT_RATE_CREDIT_INTERVAL_MAX \
(0x3F)
-#define SXE2_PF_INT_RATE_INTRL_ENABLE (BIT(6))
+#define SXE2_PF_INT_RATE_INTRL_ENABLE (RTE_BIT32(6))
#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT (7)
#define SXE2_PF_INT_RATE_CREDIT_MAX_VALUE \
(0x3F << SXE2_PF_INT_RATE_CREDIT_MAX_VALUE_SHIFT)
@@ -187,20 +187,20 @@
#define SXE2_VF_INT_ITR_INTERVAL 0xFFF
#define SXE2_VF_DYN_CTL(irq_idx) (SXE2_PF_INT_BASE + 0x9530 + 4 * (irq_idx))
-#define SXE2_VF_DYN_CTL_INTENABLE BIT(0)
-#define SXE2_VF_DYN_CTL_CLEARPBA BIT(1)
-#define SXE2_VF_DYN_CTL_SWINT_TRIG BIT(2)
+#define SXE2_VF_DYN_CTL_INTENABLE RTE_BIT32(0)
+#define SXE2_VF_DYN_CTL_CLEARPBA RTE_BIT32(1)
+#define SXE2_VF_DYN_CTL_SWINT_TRIG RTE_BIT32(2)
#define SXE2_VF_DYN_CTL_ITR_IDX_S \
3
#define SXE2_VF_DYN_CTL_ITR_IDX_M 0x3
#define SXE2_VF_DYN_CTL_INTERVAL_S 5
#define SXE2_VF_DYN_CTL_INTERVAL_M 0xFFF
-#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE BIT(24)
+#define SXE2_VF_DYN_CTL_SW_ITR_IDX_ENABLE RTE_BIT32(24)
#define SXE2_VF_DYN_CTL_SW_ITR_IDX_S 25
#define SXE2_VF_DYN_CTL_SW_ITR_IDX_M 0x3
#define SXE2_VF_DYN_CTL_INTENABLE_MSK \
- BIT(31)
+ RTE_BIT32(31)
#define SXE2_BAR4_MSIX_BASE 0
#define SXE2_BAR4_MSIX_CTL(_idx) (SXE2_BAR4_MSIX_BASE + 0xC + ((_idx) * 0x10))
@@ -225,8 +225,8 @@
#define SXE2_TXQ_DIS_CNT(_pfIdx) (0x100D0 + ((_pfIdx) * 0x100))
#define SXE2_TXQ_CTXT_CTRL_USED_MASK 0x00000800
-#define SXE2_TXQ_CTRL_SW_EN_M BIT(0)
-#define SXE2_TXQ_CTRL_HW_EN_M BIT(1)
+#define SXE2_TXQ_CTRL_SW_EN_M RTE_BIT32(0)
+#define SXE2_TXQ_CTRL_HW_EN_M RTE_BIT32(1)
#define SXE2_TXQ_CTXT2_PROT_IDX_S 0
#define SXE2_TXQ_CTXT2_PROT_IDX_M SXE2_BITS_MASK(0x7, 0)
@@ -239,37 +239,37 @@
#define SXE2_TXQ_CTXT2_VMVF_TYPE_S 23
#define SXE2_TXQ_CTXT2_VMVF_TYPE_M SXE2_BITS_MASK(0x3, 23)
#define SXE2_TXQ_CTXT2_TSYN_ENA_S 25
-#define SXE2_TXQ_CTXT2_TSYN_ENA_M BIT(25)
+#define SXE2_TXQ_CTXT2_TSYN_ENA_M RTE_BIT32(25)
#define SXE2_TXQ_CTXT2_ALT_VLAN_S 26
-#define SXE2_TXQ_CTXT2_ALT_VLAN_M BIT(26)
+#define SXE2_TXQ_CTXT2_ALT_VLAN_M RTE_BIT32(26)
#define SXE2_TXQ_CTXT2_WB_MODE_S 27
-#define SXE2_TXQ_CTXT2_WB_MODE_M BIT(27)
+#define SXE2_TXQ_CTXT2_WB_MODE_M RTE_BIT32(27)
#define SXE2_TXQ_CTXT2_ITR_WB_S 28
-#define SXE2_TXQ_CTXT2_ITR_WB_M BIT(28)
+#define SXE2_TXQ_CTXT2_ITR_WB_M RTE_BIT32(28)
#define SXE2_TXQ_CTXT2_LEGACY_EN_S 29
-#define SXE2_TXQ_CTXT2_LEGACY_EN_M BIT(29)
+#define SXE2_TXQ_CTXT2_LEGACY_EN_M RTE_BIT32(29)
#define SXE2_TXQ_CTXT2_SSO_EN_S 30
-#define SXE2_TXQ_CTXT2_SSO_EN_M BIT(30)
+#define SXE2_TXQ_CTXT2_SSO_EN_M RTE_BIT32(30)
#define SXE2_TXQ_CTXT3_SRC_VSI_S 0
#define SXE2_TXQ_CTXT3_SRC_VSI_M SXE2_BITS_MASK(0x3FF, 0)
#define SXE2_TXQ_CTXT3_CPU_ID_S 12
#define SXE2_TXQ_CTXT3_CPU_ID_M SXE2_BITS_MASK(0xFF, 12)
#define SXE2_TXQ_CTXT3_TPH_RDDESC_S 20
-#define SXE2_TXQ_CTXT3_TPH_RDDESC_M BIT(20)
+#define SXE2_TXQ_CTXT3_TPH_RDDESC_M RTE_BIT32(20)
#define SXE2_TXQ_CTXT3_TPH_RDDATA_S 21
-#define SXE2_TXQ_CTXT3_TPH_RDDATA_M BIT(21)
+#define SXE2_TXQ_CTXT3_TPH_RDDATA_M RTE_BIT32(21)
#define SXE2_TXQ_CTXT3_TPH_WRDESC_S 22
-#define SXE2_TXQ_CTXT3_TPH_WRDESC_M BIT(22)
+#define SXE2_TXQ_CTXT3_TPH_WRDESC_M RTE_BIT32(22)
#define SXE2_TXQ_CTXT3_QID_IN_FUNC_S 0
#define SXE2_TXQ_CTXT3_QID_IN_FUNC_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_TXQ_CTXT3_RDDESC_RO_S 13
-#define SXE2_TXQ_CTXT3_RDDESC_RO_M BIT(13)
+#define SXE2_TXQ_CTXT3_RDDESC_RO_M RTE_BIT32(13)
#define SXE2_TXQ_CTXT3_WRDESC_RO_S 14
-#define SXE2_TXQ_CTXT3_WRDESC_RO_M BIT(14)
+#define SXE2_TXQ_CTXT3_WRDESC_RO_M RTE_BIT32(14)
#define SXE2_TXQ_CTXT3_RDDATA_RO_S 15
-#define SXE2_TXQ_CTXT3_RDDATA_RO_M BIT(15)
+#define SXE2_TXQ_CTXT3_RDDATA_RO_M RTE_BIT32(15)
#define SXE2_TXQ_CTXT3_QLEN_S 16
#define SXE2_TXQ_CTXT3_QLEN_M SXE2_BITS_MASK(0x1FFF, 16)
@@ -400,16 +400,16 @@ enum {
#define SXE2_PF_CTRLQ_MBX_ARQT (SXE2_PF_CTRLQ_MBX_BASE + 0xE580)
#define SXE2_CMD_REG_LEN_M 0x3FF
-#define SXE2_CMD_REG_LEN_VFE_M BIT(28)
-#define SXE2_CMD_REG_LEN_OVFL_M BIT(29)
-#define SXE2_CMD_REG_LEN_CRIT_M BIT(30)
-#define SXE2_CMD_REG_LEN_ENABLE_M BIT(31)
+#define SXE2_CMD_REG_LEN_VFE_M RTE_BIT32(28)
+#define SXE2_CMD_REG_LEN_OVFL_M RTE_BIT32(29)
+#define SXE2_CMD_REG_LEN_CRIT_M RTE_BIT32(30)
+#define SXE2_CMD_REG_LEN_ENABLE_M RTE_BIT32(31)
#define SXE2_CMD_REG_HEAD_M 0x3FF
#define SXE2_PF_CTRLQ_FW_HW_STS (SXE2_PF_CTRLQ_FW_BASE + 0x0500)
-#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK BIT(0)
-#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK BIT(1)
+#define SXE2_PF_CTRLQ_FW_ATQ_IDLE_MASK RTE_BIT32(0)
+#define SXE2_PF_CTRLQ_FW_ARQ_IDLE_MASK RTE_BIT32(1)
#define SXE2_TOP_CFG_BASE 0x00292000
#define SXE2_HW_VER (SXE2_TOP_CFG_BASE + 0x48c)
@@ -465,26 +465,26 @@ enum {
#define SXE2_L2TAG_ID_VLAN 3
#define SXE2_PFP_L2TAGSEN_ALL_TAG 0xFF
-#define SXE2_PFP_L2TAGSEN_DVM BIT(10)
+#define SXE2_PFP_L2TAGSEN_DVM RTE_BIT32(10)
#define SXE2_VSI_TSR_STRIP_TAG_S 0
#define SXE2_VSI_TSR_SHOW_TAG_S 4
-#define SXE2_VSI_TSR_ID_STAG BIT(0)
-#define SXE2_VSI_TSR_ID_OUT_VLAN1 BIT(1)
-#define SXE2_VSI_TSR_ID_OUT_VLAN2 BIT(2)
-#define SXE2_VSI_TSR_ID_VLAN BIT(3)
+#define SXE2_VSI_TSR_ID_STAG RTE_BIT32(0)
+#define SXE2_VSI_TSR_ID_OUT_VLAN1 RTE_BIT32(1)
+#define SXE2_VSI_TSR_ID_OUT_VLAN2 RTE_BIT32(2)
+#define SXE2_VSI_TSR_ID_VLAN RTE_BIT32(3)
#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_S 0
#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_ID_M 0x7
-#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID BIT(3)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG1_VALID RTE_BIT32(3)
#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_S 4
#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_ID_M 0x7
-#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID BIT(7)
+#define SXE2_VSI_L2TAGSTXVALID_L2TAG2_VALID RTE_BIT32(7)
#define SXE2_VSI_L2TAGSTXVALID_TIR0_ID_S 16
-#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID BIT(19)
+#define SXE2_VSI_L2TAGSTXVALID_TIR0_VALID RTE_BIT32(19)
#define SXE2_VSI_L2TAGSTXVALID_TIR1_ID_S 20
-#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID BIT(23)
+#define SXE2_VSI_L2TAGSTXVALID_TIR1_VALID RTE_BIT32(23)
#define SXE2_VSI_L2TAGSTXVALID_ID_STAG 0
#define SXE2_VSI_L2TAGSTXVALID_ID_OUT_VLAN1 2
@@ -498,43 +498,43 @@ enum {
#define SXE2_VSI_RX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01074 + ((_i) * 4))
#define SXE2_VSI_TX_SWITCH_CTRL(_i) (SXE2_SWITCH_RG_BASE + 0x01C74 + ((_i) * 4))
-#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE BIT(9)
+#define SXE2_VSI_RX_SW_CTRL_VLAN_PRUNE RTE_BIT32(9)
-#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN BIT(1)
-#define SXE2_VSI_TX_SW_CTRL_LAN_EN BIT(2)
-#define SXE2_VSI_TX_SW_CTRL_MACAS_EN BIT(3)
-#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE BIT(9)
+#define SXE2_VSI_TX_SW_CTRL_LOOPBACK_EN RTE_BIT32(1)
+#define SXE2_VSI_TX_SW_CTRL_LAN_EN RTE_BIT32(2)
+#define SXE2_VSI_TX_SW_CTRL_MACAS_EN RTE_BIT32(3)
+#define SXE2_VSI_TX_SW_CTRL_VLAN_PRUNE RTE_BIT32(9)
#define SXE2_VSI_TAR_UNTAGGED_SHIFT (16)
#define SXE2_PCIE_SYS_READY 0x38c
-#define SXE2_PCIE_SYS_READY_CORER_ASSERT BIT(0)
-#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE BIT(2)
-#define SXE2_PCIE_SYS_READY_R5 BIT(3)
-#define SXE2_PCIE_SYS_READY_STOP_DROP BIT(16)
+#define SXE2_PCIE_SYS_READY_CORER_ASSERT RTE_BIT32(0)
+#define SXE2_PCIE_SYS_READY_STOP_DROP_DONE RTE_BIT32(2)
+#define SXE2_PCIE_SYS_READY_R5 RTE_BIT32(3)
+#define SXE2_PCIE_SYS_READY_STOP_DROP RTE_BIT32(16)
#define SXE2_PCIE_DEV_CTRL_DEV_STATUS 0x78
-#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING BIT(21)
+#define SXE2_PCIE_DEV_CTRL_DEV_STATUS_TRANS_PENDING RTE_BIT32(21)
#define SXE2_TOP_CFG_CORE (SXE2_TOP_CFG_BASE + 0x0630)
#define SXE2_TOP_CFG_CORE_RST_CODE 0x09FBD586
#define SXE2_PFGEN_CTRL (0x00336000)
-#define SXE2_PFGEN_CTRL_PFSWR BIT(0)
+#define SXE2_PFGEN_CTRL_PFSWR RTE_BIT32(0)
#define SXE2_VFGEN_CTRL(_vf) (0x00337000 + ((_vf) * 4))
-#define SXE2_VFGEN_CTRL_VFSWR BIT(0)
+#define SXE2_VFGEN_CTRL_VFSWR RTE_BIT32(0)
#define SXE2_VF_VRC_VFGEN_RSTAT(_vf) (0x00338000 + (_vf)*4)
#define SXE2_VF_VRC_VFGEN_VFRSTAT (0x3)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_VFR (0)
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (BIT(0))
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (BIT(1))
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (BIT(2))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_COMPLETE (RTE_BIT32(0))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_VF_ACTIVE (RTE_BIT32(1))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_MASK (RTE_BIT32(2))
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF (0x300)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_NO_VFR (0)
#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_VFR (1)
-#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (BIT(10))
+#define SXE2_VF_VRC_VFGEN_VFRSTAT_FORVF_MASK (RTE_BIT32(10))
#define SXE2_GLGEN_VFLRSTAT(_reg) (0x0033A000 + ((_reg)*4))
@@ -548,7 +548,7 @@ enum {
#define SXE2_VF_RXQ_BASE_Q_NUM_M (0x7FF << SXE2_VF_RXQ_BASE_Q_NUM_S)
#define SXE2_VF_RXQ_MAPENA(_VF) (0x000b0400 + ((_VF) * 4))
-#define SXE2_VF_RXQ_MAPENA_M BIT(0)
+#define SXE2_VF_RXQ_MAPENA_M RTE_BIT32(0)
#define SXE2_VF_TXQ_BASE(_VF) (0x00040400 + ((_VF) * 4))
#define SXE2_VF_TXQ_BASE_FIRST_Q_S 0
@@ -557,12 +557,12 @@ enum {
#define SXE2_VF_TXQ_BASE_Q_NUM_M (0xFF << SXE2_VF_TXQ_BASE_Q_NUM_S)
#define SXE2_VF_TXQ_MAPENA(_VF) (0x00045000 + ((_VF) * 4))
-#define SXE2_VF_TXQ_MAPENA_M BIT(0)
+#define SXE2_VF_TXQ_MAPENA_M RTE_BIT32(0)
#define PRI_PTP_BASEADDR 0x2a8000
#define GLTSYN (PRI_PTP_BASEADDR + 0x0)
-#define GLTSYN_ENA_M BIT(0)
+#define GLTSYN_ENA_M RTE_BIT32(0)
#define GLTSYN_CMD (PRI_PTP_BASEADDR + 0x4)
#define GLTSYN_CMD_INIT_TIME 0x01
@@ -578,12 +578,12 @@ enum {
#define GLTSYN_SYNC_GEN_PULSE 0x4
#define GLTSYN_SEM (PRI_PTP_BASEADDR + 0xC)
-#define GLTSYN_SEM_BUSY_M BIT(0)
+#define GLTSYN_SEM_BUSY_M RTE_BIT32(0)
#define GLTSYN_STAT (PRI_PTP_BASEADDR + 0x10)
-#define GLTSYN_STAT_EVENT0_M BIT(0)
-#define GLTSYN_STAT_EVENT1_M BIT(1)
-#define GLTSYN_STAT_EVENT2_M BIT(2)
+#define GLTSYN_STAT_EVENT0_M RTE_BIT32(0)
+#define GLTSYN_STAT_EVENT1_M RTE_BIT32(1)
+#define GLTSYN_STAT_EVENT2_M RTE_BIT32(2)
#define GLTSYN_TIME_SUBNS (PRI_PTP_BASEADDR + 0x20)
#define GLTSYN_TIME_NS (PRI_PTP_BASEADDR + 0x24)
@@ -616,19 +616,19 @@ enum {
#define GLTSYN_AUXOUT(_i) \
(PRI_PTP_BASEADDR + 0xD0 + ((_i) * 4))
-#define GLTSYN_AUXOUT_OUT_ENA BIT(0)
+#define GLTSYN_AUXOUT_OUT_ENA RTE_BIT32(0)
#define GLTSYN_AUXOUT_OUT_MOD (0x03 << 1)
-#define GLTSYN_AUXOUT_OUTLVL BIT(3)
-#define GLTSYN_AUXOUT_INT_ENA BIT(4)
+#define GLTSYN_AUXOUT_OUTLVL RTE_BIT32(3)
+#define GLTSYN_AUXOUT_INT_ENA RTE_BIT32(4)
#define GLTSYN_AUXOUT_PULSEW (0x1fff << 3)
#define GLTSYN_CLKO(_i) \
(PRI_PTP_BASEADDR + 0xE0 + ((_i) * 4))
#define GLTSYN_AUXIN(_i) (PRI_PTP_BASEADDR + 0xF4 + ((_i) * 4))
-#define GLTSYN_AUXIN_RISING_EDGE BIT(0)
-#define GLTSYN_AUXIN_FALLING_EDGE BIT(1)
-#define GLTSYN_AUXIN_ENABLE BIT(4)
+#define GLTSYN_AUXIN_RISING_EDGE RTE_BIT32(0)
+#define GLTSYN_AUXIN_FALLING_EDGE RTE_BIT32(1)
+#define GLTSYN_AUXIN_ENABLE RTE_BIT32(4)
#define CGMAC_CSR_BASE 0x2B4000
@@ -654,13 +654,13 @@ enum {
#define SXE2_VF_GLINT_CEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_S 11
#define SXE2_VF_GLINT_CEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
-#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_GLINT_CEQCTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_VF_GLINT_CEQCTL(_INT) (0x0026492C + ((_INT) * 4))
#define SXE2_VF_PFINT_AEQCTL_MSIX_INDX_M SXE2_BITS_MASK(0x7FF, 0)
#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_S 11
#define SXE2_VF_VPINT_AEQCTL_ITR_INDX_M SXE2_BITS_MASK(0x3, 11)
-#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M BIT(30)
+#define SXE2_VF_VPINT_AEQCTL_CAUSE_ENA_M RTE_BIT32(30)
#define SXE2_VF_VPINT_AEQCTL(_VF) (0x0026052c + ((_VF) * 4))
#define SXE2_IPSEC_TX_BASE (0x2A0000)
@@ -704,4 +704,4 @@ enum {
#define SXE2_RXPFCXOFFFRAMES_LO(port, pri) (SXE2_PAUSE_STATS_BASE(port) + \
(0x0b70 + 8 * (pri)))
-#endif
+#endif /* __SXE2_HOST_REGS_H__ */
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
new file mode 100644
index 0000000000..4c2bc452ff
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <rte_version.h>
+#include <eal_export.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_common_log.h"
+#include "sxe2_ioctl_chnl.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_CHR_DEV_NAME "/dev/sxe2-dpdk-"
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_close)
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev)
+{
+ cdev->config.kernel_reset = true;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_cmd_exec)
+int32_t
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+ struct sxe2_drv_cmd_params *cmd_params)
+{
+ int32_t cmd_fd;
+ int32_t ret = -EIO;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] error", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Exec drv cmd fd[%d] trace_id[0x%"PRIx64"]"
+ "opcode[0x%x] req_len[%d] resp_len[%d]",
+ cmd_fd, cmd_params->trace_id, cmd_params->opcode,
+ cmd_params->req_len, cmd_params->resp_len);
+
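+ /* Serialize passthrough ioctls on the shared command fd. */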
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_PASSTHROUGH, cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Fail to exec cmd, fd[%d] opcode[0x%x] ret[%d], err:%s",
+ cmd_fd, cmd_params->opcode, ret, strerror(errno));
+ ret = -errno;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_open)
+int32_t
+sxe2_drv_dev_open(struct sxe2_common_device *cdev, struct rte_pci_device *pci_dev)
+{
+ int32_t ret = 0;
+ int32_t fd = 0;
+ char drv_name[32] = {0};
+
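+ /* Per-function character device: /dev/sxe2-dpdk-<domain>:<bus>:<devid>.<function> */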
+ snprintf(drv_name, sizeof(drv_name),
+ "%s%04"PRIx32":%02"PRIx8":%02"PRIx8".%"PRIx8,
+ SXE2_CHR_DEV_NAME,
+ pci_dev->addr.domain,
+ pci_dev->addr.bus,
+ pci_dev->addr.devid,
+ pci_dev->addr.function);
+
+ fd = open(drv_name, O_RDWR);
+ if (fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Fail to open device:%s, ret=%d, err:%s",
+ drv_name, ret, strerror(errno));
+ goto l_end;
+ }
+
+ SXE2_CDEV_TO_CMD_FD(cdev) = fd;
+
+ PMD_LOG_INFO(COM, "Successfully opened device:%s, fd=%d",
+ drv_name, SXE2_CDEV_TO_CMD_FD(cdev));
+
+l_end:
+ return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_close)
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev)
+{
+ int32_t fd = SXE2_CDEV_TO_CMD_FD(cdev);
+
+ if (fd >= 0) {
+ close(fd);
+ PMD_LOG_INFO(COM, "closed device fd=%d", fd);
+ }
+ SXE2_CDEV_TO_CMD_FD(cdev) = -1;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+int32_t
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+{
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+ struct sxe2_ioctl_cmd_common_hdr cmd_params;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd);
+
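+ /* Advertise the PMD channel version; the kernel answers with its capability bits. */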
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
+ cmd_params.dpdk_ver = SXE2_COM_VER;
+ cmd_params.msg_len = sizeof(struct sxe2_ioctl_cmd_common_hdr);
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+ if (cmd_params.cap & RTE_BIT32(SXE2_COM_CAP_IOMMU_MAP))
+ cdev->config.support_iommu = true;
+ else
+ cdev->config.support_iommu = false;
+
+l_end:
+ return ret;
+}
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.h b/drivers/common/sxe2/sxe2_ioctl_chnl.h
new file mode 100644
index 0000000000..2560349e70
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IOCTL_CHNL_H__
+#define __SXE2_IOCTL_CHNL_H__
+
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+
+#include "sxe2_internal_ver.h"
+
+#define SXE2_COM_INVAL_uint32_t 0xFFFFFFFF
+
+#define SXE2_COM_PCI_OFFSET_SHIFT 40
+
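+/* A 64-bit mmap offset encodes the BAR index in bits 40-63;
+ * bits 0-39 carry the byte offset within that BAR.
+ */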
+#define SXE2_COM_PCI_INDEX_TO_OFFSET(index) ((uint64_t)(index) << SXE2_COM_PCI_OFFSET_SHIFT)
+#define SXE2_COM_PCI_OFFSET_MASK (((uint64_t)(1) << SXE2_COM_PCI_OFFSET_SHIFT) - 1)
+#define SXE2_COM_PCI_OFFSET_GEN(index, off) ((((uint64_t)(index)) << SXE2_COM_PCI_OFFSET_SHIFT) | \
+ (((uint64_t)(off)) & SXE2_COM_PCI_OFFSET_MASK))
+
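+/* Trace ids carry a 54-bit wrapping counter in their low bits (see union sxe2_drv_trace_info). */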
+#define SXE2_DRV_TRACE_ID_COUNT_MASK 0x003FFFFFFFFFFFFFLLU
+
+#define SXE2_DRV_CMD_DFLT_TIMEOUT (30)
+
+#define SXE2_COM_VER_MAJOR 1
+#define SXE2_COM_VER_MINOR 0
+#define SXE2_COM_VER SXE2_MK_VER(SXE2_COM_VER_MAJOR, SXE2_COM_VER_MINOR)
+
+enum SXE2_COM_CMD {
+ SXE2_DEVICE_HANDSHAKE = 1,
+ SXE2_DEVICE_IO_IRQS_REQ,
+ SXE2_DEVICE_EVT_IRQ_REQ,
+ SXE2_DEVICE_RST_IRQ_REQ,
+ SXE2_DEVICE_EVT_CAUSE_GET,
+ SXE2_DEVICE_DMA_MAP,
+ SXE2_DEVICE_DMA_UNMAP,
+ SXE2_DEVICE_PASSTHROUGH,
+ SXE2_DEVICE_MAX,
+};
+
+#define SXE2_CMD_TYPE 'S'
+
+#define SXE2_COM_CMD_HANDSHAKE _IO(SXE2_CMD_TYPE, SXE2_DEVICE_HANDSHAKE)
+#define SXE2_COM_CMD_IO_IRQS_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_IO_IRQS_REQ)
+#define SXE2_COM_CMD_EVT_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_IRQ_REQ)
+#define SXE2_COM_CMD_RST_IRQ_REQ _IO(SXE2_CMD_TYPE, SXE2_DEVICE_RST_IRQ_REQ)
+#define SXE2_COM_CMD_EVT_CAUSE_GET _IO(SXE2_CMD_TYPE, SXE2_DEVICE_EVT_CAUSE_GET)
+#define SXE2_COM_CMD_DMA_MAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_MAP)
+#define SXE2_COM_CMD_DMA_UNMAP _IO(SXE2_CMD_TYPE, SXE2_DEVICE_DMA_UNMAP)
+#define SXE2_COM_CMD_PASSTHROUGH _IO(SXE2_CMD_TYPE, SXE2_DEVICE_PASSTHROUGH)
+
+enum sxe2_com_cap {
+ SXE2_COM_CAP_IOMMU_MAP = 0,
+};
+
+struct sxe2_ioctl_cmd_common_hdr {
+ uint32_t dpdk_ver;
+ uint32_t drv_ver;
+ uint32_t msg_len;
+ uint32_t cap;
+ uint8_t reserved[32];
+};
+
+struct sxe2_drv_cmd_params {
+ uint64_t trace_id;
+ uint32_t timeout;
+ uint32_t opcode;
+ uint16_t vsi_id;
+ uint16_t repr_id;
+ uint32_t req_len;
+ uint32_t resp_len;
+ void *req_data;
+ void *resp_data;
+ uint8_t resv[32];
+};
+
+struct sxe2_ioctl_irq_set {
+ uint32_t cnt;
+ uint8_t resv[4];
+ uint32_t base_irq_in_com;
+ int32_t *event_fd;
+};
+
+enum sxe2_com_event_cause {
+ SXE2_COM_EC_LINK_CHG = 0,
+ SXE2_COM_SW_MODE_LEGACY,
+ SXE2_COM_SW_MODE_SWITCHDEV,
+ SXE2_COM_FC_ST_CHANGE,
+
+ SXE2_COM_EC_RESET = 62,
+ SXE2_COM_EC_MAX = 63,
+};
+
+struct sxe2_ioctl_other_evt_set {
+ int32_t eventfd;
+ uint8_t resv[4];
+ uint64_t filter_table;
+};
+
+struct sxe2_ioctl_other_evt_get {
+ uint64_t evt_cause;
+ uint8_t resv[8];
+};
+
+struct sxe2_ioctl_reset_sub_set {
+ int32_t eventfd;
+ uint8_t resv[4];
+};
+
+struct sxe2_ioctl_iommu_dma_map {
+ uint64_t vaddr;
+ uint64_t iova;
+ uint64_t size;
+ uint8_t resv[4];
+};
+
+struct sxe2_ioctl_iommu_dma_unmap {
+ uint64_t iova;
+};
+
+union sxe2_drv_trace_info {
+ uint64_t id;
+ struct {
+ uint64_t count : 54;
+ uint64_t cpu_id : 10;
+ } sxe2_drv_trace_id_param;
+};
+
+#endif /* __SXE2_IOCTL_CHNL_H__ */
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
new file mode 100644
index 0000000000..055229b0c3
--- /dev/null
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IOCTL_CHNL_FUNC_H__
+#define __SXE2_IOCTL_CHNL_FUNC_H__
+
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+
+#include "sxe2_common.h"
+#include "sxe2_ioctl_chnl.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+__rte_internal
+void
+sxe2_drv_cmd_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+int32_t
+sxe2_drv_cmd_exec(struct sxe2_common_device *cdev,
+ struct sxe2_drv_cmd_params *cmd_params);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_open(struct sxe2_common_device *cdev,
+ struct rte_pci_device *pci_dev);
+
+__rte_internal
+void
+sxe2_drv_dev_close(struct sxe2_common_device *cdev);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __SXE2_IOCTL_CHNL_FUNC_H__ */
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
index 3bea8fbf85..58a8dbc522 100644
--- a/drivers/common/sxe2/sxe2_osal.h
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -63,7 +63,7 @@ enum sxe2_itr_idx {
#define ETH_P_QINQ1 0x9100
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
-#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, 32)
+#define BITS_TO_uint32_t(nr) DIV_ROUND_UP(nr, 32)
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
@@ -86,4 +86,4 @@ static inline uint32_t sxe2_test_bit(uint32_t nr, const volatile unsigned long *
return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1)));
}
-#endif /* __SXE2_OSAL_H */
+#endif /* __SXE2_OSAL_H__ */
diff --git a/drivers/meson.build b/drivers/meson.build
index 6ae102e943..d4ae512bae 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -12,6 +12,7 @@ subdirs = [
'common/qat', # depends on bus.
'common/sfc_efx', # depends on bus.
'common/zsda', # depends on bus.
+ 'common/sxe2', # depends on bus.
'mempool', # depends on common and bus.
'dma', # depends on common and bus.
'net', # depends on common, bus, mempool
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v15 05/11] drivers: add base driver probe skeleton
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (3 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 04/11] drivers: add base driver skeleton liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 06/11] drivers: support PCI BAR mapping liujie5
` (5 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Initialize the eth_dev_ops for the sxe2 PMD. This includes the
implementation of mandatory ethdev operations such as dev_configure,
dev_start, dev_stop, and dev_infos_get.
Set up the basic infrastructure for device initialization so that
the driver is recognized as a valid Ethernet device within the
DPDK framework.
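In outline, the probe path installs a static table of callbacks on the
ethdev. A minimal sketch, using the same handler names as this patch
(the complete table and error handling are in sxe2_ethdev.c below):

  static const struct eth_dev_ops sxe2_eth_dev_ops = {
          .dev_configure = sxe2_dev_configure,
          .dev_start     = sxe2_dev_start,
          .dev_stop      = sxe2_dev_stop,
          .dev_close     = sxe2_dev_close,
          .dev_infos_get = sxe2_dev_infos_get,
  };

  /* Installed during init so the ethdev layer can dispatch API calls. */
  dev->dev_ops = &sxe2_eth_dev_ops;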
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 2 +-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 33 +-
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 11 +-
drivers/common/sxe2/sxe2_osal.h | 28 +-
drivers/net/meson.build | 1 +
drivers/net/sxe2/meson.build | 23 +
drivers/net/sxe2/sxe2_cmd_chnl.c | 323 +++++++++++
drivers/net/sxe2/sxe2_cmd_chnl.h | 37 ++
drivers/net/sxe2/sxe2_drv_cmd.h | 388 +++++++++++++
drivers/net/sxe2/sxe2_ethdev.c | 613 +++++++++++++++++++++
drivers/net/sxe2/sxe2_ethdev.h | 293 ++++++++++
drivers/net/sxe2/sxe2_irq.h | 48 ++
drivers/net/sxe2/sxe2_queue.c | 38 ++
drivers/net/sxe2/sxe2_queue.h | 191 +++++++
drivers/net/sxe2/sxe2_txrx_common.h | 540 ++++++++++++++++++
drivers/net/sxe2/sxe2_txrx_poll.h | 16 +
drivers/net/sxe2/sxe2_vsi.c | 214 +++++++
drivers/net/sxe2/sxe2_vsi.h | 204 +++++++
18 files changed, 2976 insertions(+), 27 deletions(-)
create mode 100644 drivers/net/sxe2/meson.build
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.c
create mode 100644 drivers/net/sxe2/sxe2_cmd_chnl.h
create mode 100644 drivers/net/sxe2/sxe2_drv_cmd.h
create mode 100644 drivers/net/sxe2/sxe2_ethdev.c
create mode 100644 drivers/net/sxe2/sxe2_ethdev.h
create mode 100644 drivers/net/sxe2/sxe2_irq.h
create mode 100644 drivers/net/sxe2/sxe2_queue.c
create mode 100644 drivers/net/sxe2/sxe2_queue.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.h
create mode 100644 drivers/net/sxe2/sxe2_vsi.c
create mode 100644 drivers/net/sxe2/sxe2_vsi.h
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index 27c33b1186..5cf43dd3b7 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -183,7 +183,7 @@ static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
goto l_end;
}
- ret = sxe2_drv_dev_handshark(cdev);
+ ret = sxe2_drv_dev_handshake(cdev);
if (ret != 0) {
PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret);
goto l_close_dev;
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 4c2bc452ff..11e24d04d9 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -112,9 +112,9 @@ sxe2_drv_dev_close(struct sxe2_common_device *cdev)
SXE2_CDEV_TO_CMD_FD(cdev) = -1;
}
-RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshark)
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_handshake)
int32_t
-sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
+sxe2_drv_dev_handshake(struct sxe2_common_device *cdev)
{
int32_t ret = 0;
int32_t cmd_fd = 0;
@@ -144,7 +144,7 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
if (ret < 0) {
PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
cmd_fd, ret, strerror(errno));
- ret = -EIO;
+ ret = -errno;
(void)pthread_mutex_unlock(&cdev->config.lock);
goto l_end;
}
@@ -158,3 +158,30 @@ sxe2_drv_dev_handshark(struct sxe2_common_device *cdev)
l_end:
return ret;
}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
+int32_t
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
+{
+ int32_t ret = 0;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx",
+ virt, len);
+
+ ret = munmap(virt, len);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s",
+ virt, len, strerror(errno));
+ ret = -errno;
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
index 055229b0c3..710ca1a8d0 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -35,7 +35,16 @@ sxe2_drv_dev_close(struct sxe2_common_device *cdev);
__rte_internal
int32_t
-sxe2_drv_dev_handshark(struct sxe2_common_device *cdev);
+sxe2_drv_dev_handshake(struct sxe2_common_device *cdev);
+
+__rte_internal
+void *
+sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, uint8_t bar_idx,
+ uint64_t len, uint64_t offset);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len);
#ifdef __cplusplus
}
diff --git a/drivers/common/sxe2/sxe2_osal.h b/drivers/common/sxe2/sxe2_osal.h
index 58a8dbc522..e16ee8c7f8 100644
--- a/drivers/common/sxe2/sxe2_osal.h
+++ b/drivers/common/sxe2/sxe2_osal.h
@@ -23,14 +23,6 @@
#define BIT_WORD(nr) ((nr) / __BITS_PER_LONG)
#define BIT_MASK(nr) (1UL << ((nr) % __BITS_PER_LONG))
-#define BITS_PER_BYTE 8
-
-#define IS_UNICAST_ETHER_ADDR(addr) \
- ((bool)((((uint8_t *)(addr))[0] % ((uint8_t)0x2)) == 0))
-
-#define STRUCT_SIZE(ptr, field, num) \
- (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
-
#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
for ((var) = TAILQ_FIRST((head)); \
@@ -38,16 +30,11 @@
(var) = (tvar))
#endif
-#define SXE2_QUEUE_WAIT_RETRY_CNT (50)
-
#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
-#define FIELD_SIZEOF(t, f) RTE_SIZEOF_FIELD(t, f)
-#define ARRAY_SIZE(arr) RTE_DIM(arr)
-
-#ifndef DIV_ROUND_UP
-#define DIV_ROUND_UP(n, d) \
+#ifndef SXE2_DIV_ROUND_UP
+#define SXE2_DIV_ROUND_UP(n, d) \
(((n) + (typeof(n))(d) - (typeof(n))1) / (typeof(n))(d))
#endif
@@ -58,12 +45,9 @@ enum sxe2_itr_idx {
SXE2_ITR_IDX_NONE,
};
-#define ETH_P_8021Q 0x8100
-#define ETH_P_8021AD 0x88a8
-#define ETH_P_QINQ1 0x9100
-
-#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(unsigned long))
-#define BITS_TO_uint32_t(nr) DIV_ROUND_UP(nr, 32)
+#define SXE2_ETH_ALEN 6
+#define BITS_TO_LONGS(nr) SXE2_DIV_ROUND_UP(nr, 8 * sizeof(unsigned long))
+#define BITS_TO_uint32_t(nr) SXE2_DIV_ROUND_UP(nr, 32)
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (__BITS_PER_LONG - 1)))
@@ -81,7 +65,7 @@ static inline void sxe2_clear_bit(uint32_t nr, unsigned long *addr)
addr[nr / __BITS_PER_LONG] &= ~(1UL << (nr % __BITS_PER_LONG));
}
-static inline uint32_t sxe2_test_bit(uint32_t nr, const volatile unsigned long *addr)
+static inline uint32_t sxe2_test_bit(uint32_t nr, const unsigned long *addr)
{
return 1UL & (addr[BIT_WORD(nr)] >> (nr & (__BITS_PER_LONG-1)));
}
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c7dae4ad27..4e8ccb945f 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -58,6 +58,7 @@ drivers = [
'rnp',
'sfc',
'softnic',
+ 'sxe2',
'tap',
'thunderx',
'txgbe',
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
new file mode 100644
index 0000000000..00c38b147c
--- /dev/null
+++ b/drivers/net/sxe2/meson.build
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+
+if is_windows
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+cflags += ['-g']
+
+deps += ['common_sxe2', 'hash', 'cryptodev', 'security']
+
+includes += include_directories('../../common/sxe2')
+
+sources += files(
+ 'sxe2_ethdev.c',
+ 'sxe2_cmd_chnl.c',
+ 'sxe2_vsi.c',
+ 'sxe2_queue.c',
+)
+
+allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.c b/drivers/net/sxe2/sxe2_cmd_chnl.c
new file mode 100644
index 0000000000..d16b6528d0
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.c
@@ -0,0 +1,323 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_ioctl_chnl_func.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_cmd_chnl.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+static union sxe2_drv_trace_info sxe2_drv_trace_id;
+
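+/* Bump the shared counter, masked to 54 bits, and hand back the packed trace id. */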
+static void sxe2_drv_trace_id_alloc(uint64_t *trace_id)
+{
+ union sxe2_drv_trace_info *trace = NULL;
+ uint64_t trace_id_count = 0;
+
+ trace = &sxe2_drv_trace_id;
+
+ trace_id_count = trace->sxe2_drv_trace_id_param.count;
+ ++trace_id_count;
+ trace->sxe2_drv_trace_id_param.count =
+ (trace_id_count & SXE2_DRV_TRACE_ID_COUNT_MASK);
+
+ *trace_id = trace->id;
+}
+
+static void __sxe2_drv_cmd_params_fill(struct sxe2_adapter *adapter,
+ struct sxe2_drv_cmd_params *cmd, uint32_t opc, const char *opc_str,
+ void *in_data, uint32_t in_len, void *out_data, uint32_t out_len)
+{
+ PMD_DEV_LOG_DEBUG(adapter, DRV, "cmd opcode:%s", opc_str);
+ cmd->timeout = SXE2_DRV_CMD_DFLT_TIMEOUT;
+ cmd->opcode = opc;
+ cmd->vsi_id = adapter->vsi_ctxt.dpdk_vsi_id;
+ cmd->repr_id = (adapter->repr_priv_data != NULL) ?
+ adapter->repr_priv_data->repr_id : 0xFFFF;
+ cmd->req_len = in_len;
+ cmd->req_data = in_data;
+ cmd->resp_len = out_len;
+ cmd->resp_data = out_data;
+
+ sxe2_drv_trace_id_alloc(&cmd->trace_id);
+}
+
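+/* Stringify the opcode so __sxe2_drv_cmd_params_fill() can log its name. */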
+#define sxe2_drv_cmd_params_fill(adapter, cmd, opc, in_data, in_len, out_data, out_len) \
+ __sxe2_drv_cmd_params_fill(adapter, cmd, opc, #opc, in_data, in_len, out_data, out_len)
+
+
+int32_t sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter, struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_CAPS,
+ NULL, 0, dev_caps,
+ sizeof(struct sxe2_drv_dev_caps_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev caps failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_info_resp *dev_info_resp)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_INFO,
+ NULL, 0, dev_info_resp,
+ sizeof(struct sxe2_drv_dev_info_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev info failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_DEV_GET_FW_INFO,
+ NULL, 0, dev_fw_info_resp,
+ sizeof(struct sxe2_drv_dev_fw_info_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "get dev fw info failed, ret=%d", ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_vsi_create_req_resp vsi_req = {0};
+ struct sxe2_drv_vsi_create_req_resp vsi_resp = {0};
+
+ vsi_req.vsi_id = vsi->vsi_id;
+
+ vsi_req.used_queues.queues_cnt = RTE_MIN(vsi->txqs.q_cnt, vsi->rxqs.q_cnt);
+ vsi_req.used_queues.base_idx_in_pf = vsi->txqs.base_idx_in_func;
+ vsi_req.used_msix.msix_vectors_cnt = vsi->irqs.avail_cnt;
+ vsi_req.used_msix.base_idx_in_func = vsi->irqs.base_idx_in_pf;
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_CREATE,
+ &vsi_req, sizeof(struct sxe2_drv_vsi_create_req_resp),
+ &vsi_resp, sizeof(struct sxe2_drv_vsi_create_req_resp));
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "dev add vsi failed, ret=%d", ret);
+ goto l_end;
+ }
+
+ vsi->vsi_id = vsi_resp.vsi_id;
+ vsi->vsi_type = vsi_resp.vsi_type;
+
+l_end:
+ return ret;
+}
+
+int32_t sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_vsi_free_req vsi_req = {0};
+
+ vsi_req.vsi_id = vsi->vsi_id;
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_VSI_FREE,
+ &vsi_req, sizeof(struct sxe2_drv_vsi_free_req),
+ NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "dev del vsi failed, ret=%d", ret);
+
+ return ret;
+}
+
+#define SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (1 << 7)
+#define SXE2_RX_HDR_SIZE 256
+
+static int32_t sxe2_rxq_ctxt_cfg_fill(struct sxe2_rx_queue *rxq,
+ struct sxe2_drv_rxq_cfg_req *req, uint16_t rxq_cnt)
+{
+ struct sxe2_adapter *adapter = rxq->vsi->adapter;
+ struct sxe2_drv_rxq_ctxt *ctxt = req->cfg;
+ struct rte_eth_dev_data *dev_data = adapter->dev_info.dev_data;
+ int32_t ret = 0;
+
+ req->vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id;
+ req->q_cnt = rxq_cnt;
+ req->max_frame_size = dev_data->mtu + SXE2_ETH_OVERHEAD;
+
+ ctxt->queue_id = rxq->queue_id;
+ ctxt->depth = rxq->ring_depth;
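+ /* Round the Rx buffer length up to SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN (128 bytes). */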
+ ctxt->buf_len = RTE_ALIGN(rxq->rx_buf_len, SXE2_RXQ_CTXT_CFG_BUF_LEN_ALIGN);
+ ctxt->dma_addr = rxq->base_addr;
+
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
+ ctxt->lro_en = 1;
+ ctxt->max_lro_size = dev_data->dev_conf.rxmode.max_lro_pkt_size;
+ } else {
+ ctxt->lro_en = 0;
+ }
+
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ ctxt->keep_crc_en = 1;
+ else
+ ctxt->keep_crc_en = 0;
+
+ ctxt->desc_size = sizeof(union sxe2_rx_desc);
+ return ret;
+}
+
+int32_t sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_rx_queue *rxq,
+ uint16_t rxq_cnt)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_rxq_cfg_req *req = NULL;
+ uint16_t len = 0;
+
+ len = sizeof(*req) + rxq_cnt * sizeof(struct sxe2_drv_rxq_ctxt);
+ req = rte_zmalloc("sxe2_rxq_cfg", len, 0);
+ if (req == NULL) {
+ PMD_LOG_ERR(RX, "rxq cfg mem alloc failed");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ ret = sxe2_rxq_ctxt_cfg_fill(rxq, req, rxq_cnt);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_CFG_ENABLE,
+ req, len, NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq cfg failed, ret=%d", ret);
+
+l_end:
+ if (req)
+ rte_free(req);
+ return ret;
+}
+
+static void sxe2_txq_ctxt_cfg_fill(struct sxe2_tx_queue *txq,
+ struct sxe2_drv_txq_cfg_req *req,
+ uint16_t txq_cnt)
+{
+ struct sxe2_drv_txq_ctxt *ctxt = req->cfg;
+ uint16_t q_idx = 0;
+
+ req->vsi_id = txq->vsi->vsi_id;
+ req->q_cnt = txq_cnt;
+
+ for (q_idx = 0; q_idx < txq_cnt; q_idx++) {
+ ctxt = &req->cfg[q_idx];
+ ctxt->depth = txq[q_idx].ring_depth;
+ ctxt->dma_addr = txq[q_idx].base_addr;
+ ctxt->queue_id = txq[q_idx].queue_id;
+ }
+}
+
+int32_t sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_tx_queue *txq,
+ uint16_t txq_cnt)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_txq_cfg_req *req;
+ uint16_t len = 0;
+
+ len = sizeof(*req) + txq_cnt * sizeof(struct sxe2_drv_txq_ctxt);
+ req = rte_zmalloc("sxe2_txq_cfg", len, 0);
+ if (req == NULL) {
+ PMD_LOG_ERR(TX, "txq cfg mem alloc failed");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ sxe2_txq_ctxt_cfg_fill(txq, req, txq_cnt);
+
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+ req, len, NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "txq cfg failed, ret=%d", ret);
+
+l_end:
+ if (req)
+ rte_free(req);
+ return ret;
+}
+
+int32_t sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_q_switch_req req = {0};
+
+ req.vsi_id = rte_cpu_to_le_16(rxq->vsi->vsi_id);
+ req.q_idx = rxq->queue_id;
+
+ req.is_enable = (uint8_t)enable;
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_RXQ_DISABLE,
+ &req, sizeof(req), NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret)
+ PMD_DEV_LOG_ERR(adapter, DRV, "rxq switch failed, enable: %d, ret:%d",
+ enable, ret);
+
+ return ret;
+}
+
+int32_t sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable)
+{
+ int32_t ret = 0;
+ struct sxe2_common_device *cdev = adapter->cdev;
+ struct sxe2_drv_cmd_params param = {0};
+ struct sxe2_drv_q_switch_req req = {0};
+
+ req.vsi_id = rte_cpu_to_le_16(txq->vsi->vsi_id);
+ req.q_idx = txq->queue_id;
+
+ req.is_enable = (uint8_t)enable;
+ sxe2_drv_cmd_params_fill(adapter, ¶m, SXE2_DRV_CMD_TXQ_DISABLE,
+ &req, sizeof(req), NULL, 0);
+
+ ret = sxe2_drv_cmd_exec(cdev, ¶m);
+ if (ret) {
+ PMD_DEV_LOG_ERR(adapter, DRV, "txq switch failed, enable: %d, ret:%d",
+ enable, ret);
+ }
+
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_cmd_chnl.h b/drivers/net/sxe2/sxe2_cmd_chnl.h
new file mode 100644
index 0000000000..cd41cd9e8d
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_cmd_chnl.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_CMD_CHNL_H__
+#define __SXE2_CMD_CHNL_H__
+
+#include "sxe2_ethdev.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+int32_t sxe2_drv_dev_caps_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_caps_resp *dev_caps);
+
+int32_t sxe2_drv_dev_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_info_resp *dev_info_resp);
+
+int32_t sxe2_drv_dev_fw_info_get(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_fw_info_resp *dev_fw_info_resp);
+
+int32_t sxe2_drv_vsi_add(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+int32_t sxe2_drv_vsi_del(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi);
+
+int32_t sxe2_drv_rxq_switch(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq, bool enable);
+
+int32_t sxe2_drv_txq_switch(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq, bool enable);
+
+int32_t sxe2_drv_rxq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_rx_queue *rxq,
+ uint16_t rxq_cnt);
+
+int32_t sxe2_drv_txq_ctxt_cfg(struct sxe2_adapter *adapter,
+ struct sxe2_tx_queue *txq,
+ uint16_t txq_cnt);
+
+#endif /* __SXE2_CMD_CHNL_H__ */
diff --git a/drivers/net/sxe2/sxe2_drv_cmd.h b/drivers/net/sxe2/sxe2_drv_cmd.h
new file mode 100644
index 0000000000..a16087c6bf
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_drv_cmd.h
@@ -0,0 +1,388 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_DRV_CMD_H__
+#define __SXE2_DRV_CMD_H__
+
+#include "sxe2_osal.h"
+
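+/* A driver command opcode is (module id << 16) | per-module command number. */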
+#define SXE2_DRV_CMD_MODULE_S (16)
+#define SXE2_MK_DRV_CMD(module, cmd) (((module) << SXE2_DRV_CMD_MODULE_S) | ((cmd) & 0xFFFF))
+
+#define SXE2_DEV_CAPS_OFFLOAD_L2 RTE_BIT32(0)
+#define SXE2_DEV_CAPS_OFFLOAD_VLAN RTE_BIT32(1)
+#define SXE2_DEV_CAPS_OFFLOAD_RSS RTE_BIT32(2)
+#define SXE2_DEV_CAPS_OFFLOAD_IPSEC RTE_BIT32(3)
+#define SXE2_DEV_CAPS_OFFLOAD_FNAV RTE_BIT32(4)
+#define SXE2_DEV_CAPS_OFFLOAD_TM RTE_BIT32(5)
+#define SXE2_DEV_CAPS_OFFLOAD_PTP RTE_BIT32(6)
+#define SXE2_DEV_CAPS_OFFLOAD_Q_MAP RTE_BIT32(7)
+#define SXE2_DEV_CAPS_OFFLOAD_FC_STATE RTE_BIT32(8)
+
+#define SXE2_TXQ_STATS_MAP_MAX_NUM 16
+#define SXE2_RXQ_STATS_MAP_MAX_NUM 4
+#define SXE2_RXQ_MAP_Q_MAX_NUM 256
+
+#define SXE2_STAT_MAP_INVALID_QID 0xFFFF
+
+#define SXE2_SCHED_MODE_DEFAULT 0
+#define SXE2_SCHED_MODE_TM 1
+#define SXE2_SCHED_MODE_HIGH_PERFORMANCE 2
+#define SXE2_SCHED_MODE_INVALID 3
+
+#define SXE2_SRCVSI_PRUNE_MAX_NUM 2
+
+#define SXE2_PTYPE_UNKNOWN RTE_BIT32(0)
+#define SXE2_PTYPE_L2_ETHER RTE_BIT32(1)
+#define SXE2_PTYPE_L3_IPV4 RTE_BIT32(2)
+#define SXE2_PTYPE_L3_IPV6 RTE_BIT32(4)
+#define SXE2_PTYPE_L4_TCP RTE_BIT32(6)
+#define SXE2_PTYPE_L4_UDP RTE_BIT32(7)
+#define SXE2_PTYPE_L4_SCTP RTE_BIT32(8)
+#define SXE2_PTYPE_INNER_L2_ETHER RTE_BIT32(9)
+#define SXE2_PTYPE_INNER_L3_IPV4 RTE_BIT32(10)
+#define SXE2_PTYPE_INNER_L3_IPV6 RTE_BIT32(12)
+#define SXE2_PTYPE_INNER_L4_TCP RTE_BIT32(14)
+#define SXE2_PTYPE_INNER_L4_UDP RTE_BIT32(15)
+#define SXE2_PTYPE_INNER_L4_SCTP RTE_BIT32(16)
+#define SXE2_PTYPE_TUNNEL_GRENAT RTE_BIT32(17)
+
+#define SXE2_PTYPE_L2_MASK (SXE2_PTYPE_L2_ETHER)
+#define SXE2_PTYPE_L3_MASK (SXE2_PTYPE_L3_IPV4 | SXE2_PTYPE_L3_IPV6)
+#define SXE2_PTYPE_L4_MASK (SXE2_PTYPE_L4_TCP | SXE2_PTYPE_L4_UDP | \
+ SXE2_PTYPE_L4_SCTP)
+#define SXE2_PTYPE_INNER_L2_MASK (SXE2_PTYPE_INNER_L2_ETHER)
+#define SXE2_PTYPE_INNER_L3_MASK (SXE2_PTYPE_INNER_L3_IPV4 | \
+ SXE2_PTYPE_INNER_L3_IPV6)
+#define SXE2_PTYPE_INNER_L4_MASK (SXE2_PTYPE_INNER_L4_TCP | \
+ SXE2_PTYPE_INNER_L4_UDP | \
+ SXE2_PTYPE_INNER_L4_SCTP)
+#define SXE2_PTYPE_TUNNEL_MASK (SXE2_PTYPE_TUNNEL_GRENAT)
+
+enum sxe2_dev_type {
+ SXE2_DEV_T_PF = 0,
+ SXE2_DEV_T_VF,
+ SXE2_DEV_T_PF_BOND,
+ SXE2_DEV_T_MAX,
+};
+
+struct sxe2_drv_queue_caps {
+ uint16_t queues_cnt;
+ uint16_t base_idx_in_pf;
+};
+
+struct sxe2_drv_msix_caps {
+ uint16_t msix_vectors_cnt;
+ uint16_t base_idx_in_func;
+};
+
+struct sxe2_drv_rss_hash_caps {
+ uint16_t hash_key_size;
+ uint16_t lut_key_size;
+};
+
+enum sxe2_vf_vsi_valid {
+ SXE2_VF_VSI_BOTH = 0,
+ SXE2_VF_VSI_ONLY_DPDK,
+ SXE2_VF_VSI_ONLY_KERNEL,
+ SXE2_VF_VSI_MAX,
+};
+
+struct sxe2_drv_vsi_caps {
+ uint16_t func_id;
+ uint16_t dpdk_vsi_id;
+ uint16_t kernel_vsi_id;
+ uint16_t vsi_type;
+};
+
+struct sxe2_drv_representor_caps {
+ uint16_t cnt_repr_vf;
+ uint8_t rsv[2];
+ struct sxe2_drv_vsi_caps repr_vf_id[256];
+};
+
+enum sxe2_phys_port_name_type {
+ SXE2_PHYS_PORT_NAME_TYPE_NOTSET = 0,
+ SXE2_PHYS_PORT_NAME_TYPE_LEGACY,
+ SXE2_PHYS_PORT_NAME_TYPE_UPLINK,
+ SXE2_PHYS_PORT_NAME_TYPE_PFVF,
+
+ SXE2_PHYS_PORT_NAME_TYPE_UNKNOWN,
+};
+
+struct sxe2_switchdev_mode_info {
+ uint8_t pf_id;
+ uint8_t is_switchdev;
+ uint8_t rsv[2];
+};
+
+struct sxe2_switchdev_cpvsi_info {
+ uint16_t cp_vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_txsch_caps {
+ uint8_t layer_cap;
+ uint8_t tm_mid_node_num;
+ uint8_t prio_num;
+ uint8_t rev;
+};
+
+struct sxe2_drv_dev_caps_resp {
+ struct sxe2_drv_queue_caps queue_caps;
+ struct sxe2_drv_msix_caps msix_caps;
+ struct sxe2_drv_rss_hash_caps rss_hash_caps;
+ struct sxe2_drv_vsi_caps vsi_caps;
+ struct sxe2_txsch_caps txsch_caps;
+ struct sxe2_drv_representor_caps repr_caps;
+ uint8_t port_idx;
+ uint8_t pf_idx;
+ uint8_t dev_type;
+ uint8_t rev;
+ uint32_t cap_flags;
+};
+
+struct sxe2_drv_dev_info_resp {
+ uint64_t dsn;
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+ uint8_t mac_addr[SXE2_ETH_ALEN];
+ uint8_t rsv2[2];
+};
+
+struct sxe2_drv_dev_fw_info_resp {
+ uint8_t main_version_id;
+ uint8_t sub_version_id;
+ uint8_t fix_version_id;
+ uint8_t build_id;
+};
+
+struct sxe2_drv_rxq_ctxt {
+ uint64_t dma_addr;
+ uint32_t max_lro_size;
+ uint32_t split_type_mask;
+ uint16_t hdr_len;
+ uint16_t buf_len;
+ uint16_t depth;
+ uint16_t queue_id;
+ uint8_t lro_en;
+ uint8_t keep_crc_en;
+ uint8_t split_en;
+ uint8_t desc_size;
+};
+
+struct sxe2_drv_rxq_cfg_req {
+ uint16_t q_cnt;
+ uint16_t vsi_id;
+ uint16_t max_frame_size;
+ uint8_t rsv[2];
+ struct sxe2_drv_rxq_ctxt cfg[];
+};
+
+struct sxe2_drv_txq_ctxt {
+ uint64_t dma_addr;
+ uint32_t sched_mode;
+ uint16_t queue_id;
+ uint16_t depth;
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_txq_cfg_req {
+ uint16_t q_cnt;
+ uint16_t vsi_id;
+ struct sxe2_drv_txq_ctxt cfg[];
+};
+
+struct sxe2_drv_q_switch_req {
+ uint16_t q_idx;
+ uint16_t vsi_id;
+ uint8_t is_enable;
+ uint8_t sched_mode;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_create_req_resp {
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_drv_queue_caps used_queues;
+ struct sxe2_drv_msix_caps used_msix;
+};
+
+struct sxe2_drv_vsi_free_req {
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_info_get_req {
+ uint16_t vsi_id;
+ uint8_t rsv[2];
+};
+
+struct sxe2_drv_vsi_info_get_resp {
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_drv_queue_caps used_queues;
+ struct sxe2_drv_msix_caps used_msix;
+};
+
+enum sxe2_drv_cmd_module {
+ SXE2_DRV_CMD_MODULE_HANDSHAKE = 0,
+ SXE2_DRV_CMD_MODULE_DEV = 1,
+ SXE2_DRV_CMD_MODULE_VSI = 2,
+ SXE2_DRV_CMD_MODULE_QUEUE = 3,
+ SXE2_DRV_CMD_MODULE_STATS = 4,
+ SXE2_DRV_CMD_MODULE_SUBSCRIBE = 5,
+ SXE2_DRV_CMD_MODULE_RSS = 6,
+ SXE2_DRV_CMD_MODULE_FLOW = 7,
+ SXE2_DRV_CMD_MODULE_TM = 8,
+ SXE2_DRV_CMD_MODULE_IPSEC = 9,
+ SXE2_DRV_CMD_MODULE_PTP = 10,
+
+ SXE2_DRV_CMD_MODULE_VLAN = 11,
+ SXE2_DRV_CMD_MODULE_RDMA = 12,
+ SXE2_DRV_CMD_MODULE_LINK = 13,
+ SXE2_DRV_CMD_MODULE_MACADDR = 14,
+ SXE2_DRV_CMD_MODULE_PROMISC = 15,
+
+ SXE2_DRV_CMD_MODULE_LED = 16,
+ SXE2_DEV_CMD_MODULE_OPT = 17,
+ SXE2_DEV_CMD_MODULE_SWITCH = 18,
+ SXE2_DRV_CMD_MODULE_ACL = 19,
+ SXE2_DRV_CMD_MODULE_UDPTUNNEL = 20,
+ SXE2_DRV_CMD_MODULE_QUEUE_MAP = 21,
+
+ SXE2_DRV_CMD_MODULE_SCHED = 22,
+
+ SXE2_DRV_CMD_MODULE_IRQ = 23,
+
+ SXE2_DRV_CMD_MODULE_OPT = 24,
+};
+
+enum sxe2_drv_cmd_code {
+ SXE2_DRV_CMD_HANDSHAKE_ENABLE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_HANDSHAKE, 1),
+ SXE2_DRV_CMD_HANDSHAKE_DISABLE,
+
+ SXE2_DRV_CMD_DEV_GET_CAPS =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_DEV, 1),
+ SXE2_DRV_CMD_DEV_GET_INFO,
+ SXE2_DRV_CMD_DEV_GET_FW_INFO,
+ SXE2_DRV_CMD_DEV_RESET,
+ SXE2_DRV_CMD_DEV_GET_SWITCHDEV_INFO,
+
+ SXE2_DRV_CMD_VSI_CREATE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VSI, 1),
+ SXE2_DRV_CMD_VSI_FREE,
+ SXE2_DRV_CMD_VSI_INFO_GET,
+ SXE2_DRV_CMD_VSI_SRCVSI_PRUNE,
+ SXE2_DRV_CMD_VSI_FC_GET,
+
+ SXE2_DRV_CMD_RX_MAP_SET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE_MAP, 1),
+ SXE2_DRV_CMD_TX_MAP_SET,
+ SXE2_DRV_CMD_TX_RX_MAP_GET,
+ SXE2_DRV_CMD_TX_RX_MAP_RESET,
+ SXE2_DRV_CMD_TX_RX_MAP_INFO_CLEAR,
+
+ SXE2_DRV_CMD_SCHED_ROOT_TREE_ALLOC =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_SCHED, 1),
+ SXE2_DRV_CMD_SCHED_ROOT_TREE_RELEASE,
+ SXE2_DRV_CMD_SCHED_ROOT_CHILDREN_DELETE,
+ SXE2_DRV_CMD_SCHED_TM_ADD_MID_NODE,
+ SXE2_DRV_CMD_SCHED_TM_ADD_QUEUE_NODE,
+
+ SXE2_DRV_CMD_RXQ_CFG_ENABLE =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_QUEUE, 1),
+ SXE2_DRV_CMD_TXQ_CFG_ENABLE,
+ SXE2_DRV_CMD_RXQ_DISABLE,
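+/* Trace id layout: a 54-bit wrapping count in the low bits, a 10-bit CPU id above it. */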
+ SXE2_DRV_CMD_TXQ_DISABLE,
+
+ SXE2_DRV_CMD_VSI_STATS_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_STATS, 1),
+ SXE2_DRV_CMD_VSI_STATS_CLEAR,
+ SXE2_DRV_CMD_MAC_STATS_GET,
+ SXE2_DRV_CMD_MAC_STATS_CLEAR,
+
+ SXE2_DRV_CMD_RSS_KEY_SET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RSS, 1),
+ SXE2_DRV_CMD_RSS_LUT_SET,
+ SXE2_DRV_CMD_RSS_FUNC_SET,
+ SXE2_DRV_CMD_RSS_HF_ADD,
+ SXE2_DRV_CMD_RSS_HF_DEL,
+ SXE2_DRV_CMD_RSS_HF_CLEAR,
+
+ SXE2_DRV_CMD_FLOW_FILTER_ADD =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_FLOW, 1),
+ SXE2_DRV_CMD_FLOW_FILTER_DEL,
+ SXE2_DRV_CMD_FLOW_FILTER_CLEAR,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_ALLOC,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_FREE,
+ SXE2_DRV_CMD_FLOW_FNAV_STAT_QUERY,
+
+ SXE2_DRV_CMD_DEL_TM_ROOT =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_TM, 1),
+ SXE2_DRV_CMD_ADD_TM_ROOT,
+ SXE2_DRV_CMD_ADD_TM_NODE,
+ SXE2_DRV_CMD_ADD_TM_QUEUE,
+
+ SXE2_DRV_CMD_GET_PTP_CLOCK =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PTP, 1),
+
+ SXE2_DRV_CMD_VLAN_FILTER_ADD_DEL =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_VLAN, 1),
+ SXE2_DRV_CMD_VLAN_FILTER_SWITCH,
+ SXE2_DRV_CMD_VLAN_OFFLOAD_CFG,
+ SXE2_DRV_CMD_VLAN_PORTVLAN_CFG,
+ SXE2_DRV_CMD_VLAN_CFG_QUERY,
+
+ SXE2_DRV_CMD_RDMA_DUMP_PCAP =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_RDMA, 1),
+
+ SXE2_DRV_CMD_LINK_STATUS_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LINK, 1),
+
+ SXE2_DRV_CMD_MAC_ADDR_UC =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_MACADDR, 1),
+ SXE2_DRV_CMD_MAC_ADDR_MC,
+
+ SXE2_DRV_CMD_PROMISC_CFG =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_PROMISC, 1),
+ SXE2_DRV_CMD_ALLMULTI_CFG,
+
+ SXE2_DRV_CMD_LED_CTRL =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_LED, 1),
+
+ SXE2_DRV_CMD_OPT_EEP =
+ SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_OPT, 1),
+
+ SXE2_DRV_CMD_SWITCH =
+ SXE2_MK_DRV_CMD(SXE2_DEV_CMD_MODULE_SWITCH, 1),
+ SXE2_DRV_CMD_SWITCH_UPLINK,
+ SXE2_DRV_CMD_SWITCH_REPR,
+ SXE2_DRV_CMD_SWITCH_MODE,
+ SXE2_DRV_CMD_SWITCH_CPVSI,
+
+ SXE2_DRV_CMD_UDPTUNNEL_ADD =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_UDPTUNNEL, 1),
+ SXE2_DRV_CMD_UDPTUNNEL_DEL,
+ SXE2_DRV_CMD_UDPTUNNEL_GET,
+
+ SXE2_DRV_CMD_IPSEC_CAP_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IPSEC, 1),
+ SXE2_DRV_CMD_IPSEC_TXSA_ADD,
+ SXE2_DRV_CMD_IPSEC_RXSA_ADD,
+ SXE2_DRV_CMD_IPSEC_TXSA_DEL,
+ SXE2_DRV_CMD_IPSEC_RXSA_DEL,
+ SXE2_DRV_CMD_IPSEC_RESOURCE_CLEAR,
+
+ SXE2_DRV_CMD_EVT_IRQ_BAND_RXQ =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_IRQ, 1),
+
+ SXE2_DRV_CMD_OPT_EEP_GET =
+ SXE2_MK_DRV_CMD(SXE2_DRV_CMD_MODULE_OPT, 1),
+
+
+#endif /* __SXE2_DRV_CMD_H__ */
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
new file mode 100644
index 0000000000..f0bdda38a7
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -0,0 +1,613 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_string_fns.h>
+#include <ethdev_pci.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+#include <rte_tailq.h>
+#include <rte_version.h>
+#include <bus_pci_driver.h>
+#include <dev_driver.h>
+#include <ethdev_driver.h>
+#include <rte_ethdev.h>
+#include <rte_alarm.h>
+#include <rte_dev_info.h>
+#include <rte_pci.h>
+#include <rte_mbuf_dyn.h>
+#include <rte_cycles.h>
+#include <rte_eal_paging.h>
+
+#include "sxe2_ethdev.h"
+#include "sxe2_drv_cmd.h"
+#include "sxe2_cmd_chnl.h"
+#include "sxe2_common.h"
+#include "sxe2_common_log.h"
+#include "sxe2_host_regs.h"
+#include "sxe2_ioctl_chnl_func.h"
+
+#define SXE2_PCI_VENDOR_ID_1 0x1ff2
+#define SXE2_PCI_DEVICE_ID_PF_1 0x10b1
+#define SXE2_PCI_DEVICE_ID_VF_1 0x10b2
+
+#define SXE2_PCI_VENDOR_ID_2 0x1d94
+#define SXE2_PCI_DEVICE_ID_PF_2 0x1260
+#define SXE2_PCI_DEVICE_ID_VF_2 0x126f
+
+#define SXE2_PCI_DEVICE_ID_PF_3 0x10b3
+#define SXE2_PCI_DEVICE_ID_VF_3 0x10b4
+
+#define SXE2_PCI_VENDOR_ID_206F 0x206f
+
+static const struct rte_pci_id pci_id_sxe2_tbl[] = {
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_PF_2)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_2, SXE2_PCI_DEVICE_ID_VF_2)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_PF_3)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_1, SXE2_PCI_DEVICE_ID_VF_3)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_PF_1)},
+ { RTE_PCI_DEVICE(SXE2_PCI_VENDOR_ID_206F, SXE2_PCI_DEVICE_ID_VF_1)},
+ { .vendor_id = 0, },
+};
+
+static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ return ret;
+}
+
+static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
+static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused)
+{
+}
+
+static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ if (adapter->started == 0)
+ goto l_end;
+
+ sxe2_txqs_all_stop(dev);
+ sxe2_rxqs_all_stop(dev);
+
+ dev->data->dev_started = 0;
+ adapter->started = 0;
+l_end:
+ return ret;
+}
+
+static int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+static int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ ret = sxe2_txqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start tx queue.");
+ goto l_end;
+ }
+
+ ret = sxe2_rxqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start rx queue.");
+ sxe2_txqs_all_stop(dev);
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_queues_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to init queues.");
+ goto l_end;
+ }
+
+ ret = sxe2_queues_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "enable queues failed");
+ goto l_end;
+ }
+
+ dev->data->dev_started = 1;
+ adapter->started = 1;
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
+{
+ (void)sxe2_dev_stop(dev);
+
+ sxe2_vsi_uninit(dev);
+
+ return 0;
+}
+
+static int32_t sxe2_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+
+ dev_info->max_rx_queues = vsi->rxqs.q_cnt;
+ dev_info->max_tx_queues = vsi->txqs.q_cnt;
+ dev_info->min_rx_bufsize = SXE2_MIN_BUF_SIZE;
+ dev_info->max_rx_pktlen = SXE2_FRAME_SIZE_MAX;
+ dev_info->max_lro_pkt_size = SXE2_FRAME_SIZE_MAX * SXE2_RX_LRO_DESC_MAX_NUM;
+ dev_info->max_mtu = dev_info->max_rx_pktlen - SXE2_ETH_OVERHEAD;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+ dev_info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO;
+
+ dev_info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+ dev_info->rx_queue_offload_capa =
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
+ RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_LRO;
+ dev_info->tx_queue_offload_capa =
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_thresh = {
+ .pthresh = SXE2_DEFAULT_RX_PTHRESH,
+ .hthresh = SXE2_DEFAULT_RX_HTHRESH,
+ .wthresh = SXE2_DEFAULT_RX_WTHRESH,
+ },
+ .rx_free_thresh = SXE2_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_thresh = {
+ .pthresh = SXE2_DEFAULT_TX_PTHRESH,
+ .hthresh = SXE2_DEFAULT_TX_HTHRESH,
+ .wthresh = SXE2_DEFAULT_TX_WTHRESH,
+ },
+ .tx_free_thresh = SXE2_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = SXE2_DEFAULT_TX_RSBIT_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = SXE2_MAX_RING_DESC,
+ .nb_min = SXE2_MIN_RING_DESC,
+ .nb_align = SXE2_ALIGN,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = SXE2_MAX_RING_DESC,
+ .nb_min = SXE2_MIN_RING_DESC,
+ .nb_align = SXE2_ALIGN,
+ .nb_mtu_seg_max = SXE2_TX_MTU_SEG_MAX,
+ .nb_seg_max = SXE2_MAX_RING_DESC,
+ };
+
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+ RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+
+ dev_info->default_rxportconf.burst_size = SXE2_RX_MAX_BURST;
+ dev_info->default_txportconf.burst_size = SXE2_TX_MAX_BURST;
+ dev_info->default_rxportconf.nb_queues = 1;
+ dev_info->default_txportconf.nb_queues = 1;
+ dev_info->default_rxportconf.ring_size = SXE2_RING_SIZE_MIN;
+ dev_info->default_txportconf.ring_size = SXE2_RING_SIZE_MIN;
+
+ dev_info->rx_seg_capa.max_nseg = SXE2_RX_MAX_NSEG;
+ dev_info->rx_seg_capa.multi_pools = true;
+ dev_info->rx_seg_capa.offset_allowed = false;
+ dev_info->rx_seg_capa.offset_align_log2 = 0;
+
+ return 0;
+}
+
+static const struct eth_dev_ops sxe2_eth_dev_ops = {
+ .dev_configure = sxe2_dev_configure,
+ .dev_start = sxe2_dev_start,
+ .dev_stop = sxe2_dev_stop,
+ .dev_close = sxe2_dev_close,
+ .dev_infos_get = sxe2_dev_infos_get,
+};
+
+static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_dev_caps_resp *dev_caps)
+{
+ adapter->port_idx = dev_caps->port_idx;
+
+ adapter->cap_flags = 0;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_L2)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_L2;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_VLAN)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_VLAN;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_RSS)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_RSS;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_IPSEC)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_IPSEC;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FNAV)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FNAV;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_TM)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_TM;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_PTP)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_PTP;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_Q_MAP)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_Q_MAP;
+
+ if (dev_caps->cap_flags & SXE2_DEV_CAPS_OFFLOAD_FC_STATE)
+ adapter->cap_flags |= SXE2_DEV_CAPS_OFFLOAD_FC_STATE;
+}
+
+static int32_t sxe2_func_caps_get(struct sxe2_adapter *adapter)
+{
+ int32_t ret = -1;
+ struct sxe2_drv_dev_caps_resp dev_caps = {0};
+
+ ret = sxe2_drv_dev_caps_get(adapter, &dev_caps);
+ if (ret)
+ goto l_end;
+
+ adapter->dev_type = dev_caps.dev_type;
+
+ sxe2_drv_dev_caps_set(adapter, &dev_caps);
+
+ sxe2_sw_queue_ctx_hw_cap_set(adapter, &dev_caps.queue_caps);
+
+ sxe2_sw_vsi_ctx_hw_cap_set(adapter, &dev_caps.vsi_caps);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
+{
+ int32_t ret = -1;
+
+ ret = sxe2_func_caps_get(adapter);
+ if (ret)
+ PMD_LOG_ERR(INIT, "get function caps failed, ret=%d", ret);
+
+ return ret;
+}
+
+static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret = -1;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_dev_caps_get(adapter);
+ if (ret)
+ PMD_LOG_ERR(INIT, "Failed to get device caps, ret=[%d]", ret);
+
+ return ret;
+}
+
+static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter =
+ SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+ struct sxe2_dev_info *dev_info = &adapter->dev_info;
+ struct sxe2_drv_dev_info_resp dev_info_resp = {0};
+ struct sxe2_drv_dev_fw_info_resp dev_fw_info_resp = {0};
+ int32_t ret = 0;
+
+ dev_info->pci.bus_devid = pci_dev->addr.devid;
+ dev_info->pci.bus_function = pci_dev->addr.function;
+
+ ret = sxe2_drv_dev_info_get(adapter, &dev_info_resp);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret);
+ goto l_end;
+ }
+ dev_info->pci.serial_number = dev_info_resp.dsn;
+
+ ret = sxe2_drv_dev_fw_info_get(adapter, &dev_fw_info_resp);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device fw info, ret=[%d]", ret);
+ goto l_end;
+ }
+ dev_info->fw.build_id = dev_fw_info_resp.build_id;
+ dev_info->fw.fix_version_id = dev_fw_info_resp.fix_version_id;
+ dev_info->fw.sub_version_id = dev_fw_info_resp.sub_version_id;
+ dev_info->fw.main_version_id = dev_fw_info_resp.main_version_id;
+
+ if (rte_is_valid_assigned_ether_addr((struct rte_ether_addr *)dev_info_resp.mac_addr))
+ rte_ether_addr_copy((struct rte_ether_addr *)dev_info_resp.mac_addr,
+ (struct rte_ether_addr *)dev_info->mac.perm_addr);
+ else
+ rte_eth_random_addr(dev_info->mac.perm_addr);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
+ struct sxe2_dev_kvargs_info *kvargs __rte_unused)
+{
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->dev_ops = &sxe2_eth_dev_ops;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_hw_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to initialize hw, ret=[%d]", ret);
+ goto l_end;
+ }
+
+ ret = sxe2_vsi_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret);
+ goto init_vsi_err;
+ }
+
+ ret = sxe2_dev_info_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to get device info, ret=[%d]", ret);
+ goto init_dev_info_err;
+ }
+
+ goto l_end;
+
+init_dev_info_err:
+ sxe2_vsi_uninit(dev);
+init_vsi_err:
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_dev_uninit(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ goto l_end;
+
+ ret = sxe2_dev_close(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Sxe2 dev close failed, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_eth_pmd_remove(struct sxe2_common_device *cdev)
+{
+ struct rte_eth_dev *eth_dev;
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+ int32_t ret = 0;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (!eth_dev) {
+ PMD_LOG_INFO(INIT, "Sxe2 dev allocated failed");
+ goto l_end;
+ }
+
+ ret = sxe2_dev_uninit(eth_dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Sxe2 dev uninit failed, ret=%d", ret);
+ goto l_end;
+ }
+ (void)rte_eth_dev_release_port(eth_dev);
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_eth_pmd_probe_pf(struct sxe2_common_device *cdev,
+ struct rte_eth_devargs *req_eth_da __rte_unused,
+ uint16_t owner_id __rte_unused,
+ struct sxe2_dev_kvargs_info *kvargs)
+{
+ struct rte_pci_device *pci_dev = NULL;
+ struct rte_eth_dev *eth_dev = NULL;
+ struct sxe2_adapter *adapter = NULL;
+ int32_t ret = 0;
+
+ if (!cdev) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ pci_dev = RTE_DEV_TO_PCI(cdev->dev);
+
+ eth_dev = rte_eth_dev_pci_allocate(pci_dev, sizeof(struct sxe2_adapter));
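+ /* The primary process allocates the port; a secondary process only attaches to it. */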
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ if (eth_dev == NULL) {
+ PMD_LOG_ERR(INIT, "Can not allocate ethdev");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+ } else {
+ if (!eth_dev) {
+ PMD_LOG_DEBUG(INIT, "Can not attach secondary ethdev");
+ ret = -EINVAL;
+ goto l_end;
+ }
+ }
+
+ adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(eth_dev);
+ adapter->dev_port_id = eth_dev->data->port_id;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ adapter->cdev = cdev;
+
+ ret = sxe2_dev_init(eth_dev, kvargs);
+ if (ret != 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Sxe2 dev init failed, ret=%d", ret);
+ goto l_release_port;
+ }
+
+ rte_eth_dev_probing_finish(eth_dev);
+ PMD_DEV_LOG_DEBUG(adapter, INIT, "Sxe2 eth pmd probe successful!");
+ goto l_end;
+
+l_release_port:
+ (void)rte_eth_dev_release_port(eth_dev);
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
+ struct rte_eth_devargs *eth_da)
+{
+ int ret = 0;
+
+ if (dev->devargs == NULL)
+ return 0;
+
+ memset(eth_da, 0, sizeof(*eth_da));
+
+ if (dev->devargs->cls_str) {
+ ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
+ if (ret != 0) {
+ PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+ dev->devargs->cls_str);
+ return -rte_errno;
+ }
+ }
+
+ if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
+ ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
+ dev->devargs->args);
+ return -rte_errno;
+ }
+ }
+
+ return 0;
+}
+
+static int32_t sxe2_eth_pmd_probe(struct sxe2_common_device *cdev,
+ struct sxe2_dev_kvargs_info *kvargs)
+{
+ struct rte_eth_devargs eth_da = { .nb_ports = 0 };
+ int32_t ret = 0;
+
+ ret = sxe2_parse_eth_devargs(cdev->dev, ð_da);
+ if (ret != 0) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = sxe2_eth_pmd_probe_pf(cdev, ð_da, 0, kvargs);
+
+l_end:
+ return ret;
+}
+
+static struct sxe2_class_driver sxe2_eth_pmd = {
+ .drv_class = SXE2_CLASS_TYPE_ETH,
+ .name = "SXE2_ETH_PMD_DRIVER_NAME",
+ .probe = sxe2_eth_pmd_probe,
+ .remove = sxe2_eth_pmd_remove,
+ .id_table = pci_id_sxe2_tbl,
+ .intr_lsc = 1,
+ .intr_rmv = 1,
+};
+
+RTE_INIT(rte_sxe2_pmd_init)
+{
+ sxe2_common_init();
+ sxe2_class_driver_register(&sxe2_eth_pmd);
+}
+
+RTE_PMD_EXPORT_NAME(net_sxe2);
+RTE_PMD_REGISTER_PCI_TABLE(net_sxe2, pci_id_sxe2_tbl);
+RTE_PMD_REGISTER_KMOD_DEP(net_sxe2, "* sxe2");
+
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(sxe2_log_hw, hw, NOTICE);
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
new file mode 100644
index 0000000000..c4634685e6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -0,0 +1,293 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+#ifndef __SXE2_ETHDEV_H__
+#define __SXE2_ETHDEV_H__
+#include <rte_compat.h>
+#include <rte_kvargs.h>
+#include <rte_time.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_tm_driver.h>
+#include <rte_io.h>
+
+#include "sxe2_common.h"
+#include "sxe2_vsi.h"
+#include "sxe2_queue.h"
+#include "sxe2_irq.h"
+#include "sxe2_osal.h"
+
+struct sxe2_link_msg {
+ uint32_t speed;
+ uint8_t status;
+};
+
+enum sxe2_fnav_tunnel_flag_type {
+ SXE2_FNAV_TUN_FLAG_NO_TUNNEL,
+ SXE2_FNAV_TUN_FLAG_TUNNEL,
+ SXE2_FNAV_TUN_FLAG_ANY,
+};
+
+#define SXE2_VF_MAX_NUM 256
+#define SXE2_VSI_MAX_NUM 768
+#define SXE2_FRAME_SIZE_MAX 9832
+#define SXE2_VLAN_TAG_SIZE 4
+#define SXE2_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SXE2_VLAN_TAG_SIZE)
+#define SXE2_ETH_MAX_LEN (RTE_ETHER_MTU + SXE2_ETH_OVERHEAD)
+
+#ifdef SXE2_TEST
+#define SXE2_RESET_ACTIVE_WAIT_COUNT (5)
+#else
+#define SXE2_RESET_ACTIVE_WAIT_COUNT (10000)
+#endif
+#define SXE2_NO_ACTIVE_CNT (10)
+
+#define SXE2_WOKER_DELAY_5MS (5)
+#define SXE2_WOKER_DELAY_10MS (10)
+#define SXE2_WOKER_DELAY_20MS (20)
+#define SXE2_WOKER_DELAY_30MS (30)
+
+#define SXE2_RESET_DETEC_WAIT_COUNT (100)
+#define SXE2_RESET_DONE_WAIT_COUNT (250)
+#define SXE2_RESET_WAIT_MS (10)
+
+#define SXE2_RESET_WAIT_MIN (10)
+#define SXE2_RESET_WAIT_MAX (20)
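+
+/* Shift twice by 16 so the macros stay well-defined even when n is a
+ * 32-bit type (a single shift by 32 would be undefined behavior).
+ */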
+#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffff))
+
+#define SXE2_I2C_EEPROM_DEV_ADDR 0xA0
+#define SXE2_I2C_EEPROM_DEV_ADDR2 0xA2
+#define SXE2_MODULE_TYPE_SFP 0x03
+#define SXE2_MODULE_TYPE_QSFP_PLUS 0x0D
+#define SXE2_MODULE_TYPE_QSFP28 0x11
+#define SXE2_MODULE_SFF_ADDR_MODE 0x04
+#define SXE2_MODULE_SFF_DIAG_CAPAB 0x40
+#define SXE2_MODULE_REVISION_ADDR 0x01
+#define SXE2_MODULE_SFF_8472_COMP 0x5E
+#define SXE2_MODULE_SFF_8472_SWAP 0x5C
+#define SXE2_MODULE_QSFP_MAX_LEN 640
+#define SXE2_MODULE_SFF_8472_UNSUP 0x0
+#define SXE2_MODULE_SFF_DDM_IMPLEMENTED 0x40
+#define SXE2_MODULE_SFF_SFP_TYPE 0x03
+
+#define SXE2_MODULE_SFF_8079 0x1
+#define SXE2_MODULE_SFF_8079_LEN 256
+#define SXE2_MODULE_SFF_8472 0x2
+#define SXE2_MODULE_SFF_8472_LEN 512
+#define SXE2_MODULE_SFF_8636 0x3
+#define SXE2_MODULE_SFF_8636_LEN 256
+#define SXE2_MODULE_SFF_8636_MAX_LEN 640
+#define SXE2_MODULE_SFF_8436 0x4
+#define SXE2_MODULE_SFF_8436_LEN 256
+#define SXE2_MODULE_SFF_8436_MAX_LEN 640
+
+enum sxe2_wk_type {
+ SXE2_WK_MONITOR,
+ SXE2_WK_MONITOR_IM,
+ SXE2_WK_POST,
+ SXE2_WK_MBX,
+};
+
+enum {
+ SXE2_FLAG_LEGACY_RX_ENABLE = 0,
+ SXE2_FLAG_LRO_ENABLE = 1,
+ SXE2_FLAG_RXQ_DISABLED = 2,
+ SXE2_FLAG_TXQ_DISABLED = 3,
+ SXE2_FLAG_DRV_REMOVING = 4,
+ SXE2_FLAG_RESET_DETECTED = 5,
+ SXE2_FLAG_CORE_RESET_DONE = 6,
+ SXE2_FLAG_RESET_ACTIVED = 7,
+ SXE2_FLAG_RESET_PENDING = 8,
+ SXE2_FLAG_RESET_REQUEST = 9,
+ SXE2_FLAGS_RESET_PROCESS_DONE = 10,
+ SXE2_FLAG_RESET_FAILED = 11,
+ SXE2_FLAG_DRV_PROBE_DONE = 12,
+ SXE2_FLAG_NETDEV_REGISTED = 13,
+ SXE2_FLAG_DRV_UP = 15,
+ SXE2_FLAG_DCB_ENABLE = 16,
+ SXE2_FLAG_FLTR_SYNC = 17,
+
+ SXE2_FLAG_EVENT_IRQ_DISABLED = 18,
+ SXE2_FLAG_SUSPEND = 19,
+ SXE2_FLAG_FNAV_ENABLE = 20,
+
+ SXE2_FLAGS_NBITS
+};
+
+struct sxe2_link_context {
+ rte_spinlock_t link_lock;
+ bool link_up;
+ uint32_t speed;
+};
+
+struct sxe2_devargs {
+ uint8_t flow_dup_pattern_mode;
+ uint8_t func_flow_direct_en;
+ uint8_t fnav_stat_type;
+ uint8_t high_performance_mode;
+ uint8_t sched_layer_mode;
+ uint8_t sw_stats_en;
+ uint8_t rx_low_latency;
+};
+
+#define SXE2_PCI_MAP_BAR_INVALID ((uint8_t)0xff)
+#define SXE2_PCI_MAP_INVALID_VAL ((uint32_t)0xffffffff)
+
+enum sxe2_pci_map_resource {
+ SXE2_PCI_MAP_RES_INVALID = 0,
+ SXE2_PCI_MAP_RES_DOORBELL_TX,
+ SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ SXE2_PCI_MAP_RES_IRQ_DYN,
+ SXE2_PCI_MAP_RES_IRQ_ITR,
+ SXE2_PCI_MAP_RES_IRQ_MSIX,
+ SXE2_PCI_MAP_RES_PTP,
+ SXE2_PCI_MAP_RES_MAX_COUNT,
+};
+
+enum sxe2_udp_tunnel_protocol {
+ SXE2_UDP_TUNNEL_PROTOCOL_VXLAN = 0,
+ SXE2_UDP_TUNNEL_PROTOCOL_VXLAN_GPE,
+ SXE2_UDP_TUNNEL_PROTOCOL_GENEVE,
+ SXE2_UDP_TUNNEL_PROTOCOL_GTP_C = 4,
+ SXE2_UDP_TUNNEL_PROTOCOL_GTP_U,
+ SXE2_UDP_TUNNEL_PROTOCOL_PFCP,
+ SXE2_UDP_TUNNEL_PROTOCOL_ECPRI,
+ SXE2_UDP_TUNNEL_PROTOCOL_MPLS,
+ SXE2_UDP_TUNNEL_PROTOCOL_NVGRE = 10,
+ SXE2_UDP_TUNNEL_PROTOCOL_L2TP,
+ SXE2_UDP_TUNNEL_PROTOCOL_TEREDO,
+ SXE2_UDP_TUNNEL_MAX,
+};
+
+struct sxe2_pci_map_addr_info {
+ uint64_t addr_base;
+ uint8_t bar_idx;
+ uint8_t reg_width;
+};
+
+struct sxe2_pci_map_segment_info {
+ enum sxe2_pci_map_resource type;
+ void *addr;
+ uint64_t page_inner_offset;
+ uint64_t len;
+};
+
+struct sxe2_pci_map_bar_info {
+ uint8_t bar_idx;
+ uint8_t map_cnt;
+ struct sxe2_pci_map_segment_info *seg_info;
+};
+
+struct sxe2_pci_map_context {
+ uint8_t bar_cnt;
+ struct sxe2_pci_map_bar_info *bar_info;
+ struct sxe2_pci_map_addr_info *addr_info;
+};
+
+struct sxe2_dev_mac_info {
+ uint8_t perm_addr[SXE2_ETH_ALEN];
+};
+
+struct sxe2_pci_info {
+ uint64_t serial_number;
+ uint8_t bus_devid;
+ uint8_t bus_function;
+ uint16_t max_vfs;
+};
+
+struct sxe2_fw_info {
+ uint8_t main_version_id;
+ uint8_t sub_version_id;
+ uint8_t fix_version_id;
+ uint8_t build_id;
+};
+
+struct sxe2_dev_info {
+ struct rte_eth_dev_data *dev_data;
+ struct sxe2_pci_info pci;
+ struct sxe2_fw_info fw;
+ struct sxe2_dev_mac_info mac;
+};
+
+enum sxe2_udp_tunnel_status {
+ SXE2_UDP_TUNNEL_DISABLE = 0x0,
+ SXE2_UDP_TUNNEL_ENABLE,
+};
+
+struct sxe2_udp_tunnel_cfg {
+ uint8_t protocol;
+ uint8_t dev_status;
+ uint16_t dev_port;
+ uint16_t dev_ref_cnt;
+
+ uint16_t fw_port;
+ uint8_t fw_status;
+ uint8_t fw_dst_en;
+ uint8_t fw_src_en;
+ uint8_t fw_used;
+};
+
+struct sxe2_udp_tunnel_ctx {
+ struct sxe2_udp_tunnel_cfg tunnel_conf[SXE2_UDP_TUNNEL_MAX];
+ rte_spinlock_t lock;
+};
+
+struct sxe2_repr_context {
+ uint16_t nb_vf;
+ uint16_t nb_repr_vf;
+ struct rte_eth_dev **vf_rep_eth_dev;
+ struct sxe2_drv_vsi_caps repr_vf_id[SXE2_VF_MAX_NUM];
+};
+
+struct sxe2_repr_private_data {
+ struct rte_eth_dev *rep_eth_dev;
+ struct sxe2_adapter *parent_adapter;
+
+ struct sxe2_vsi *cp_vsi;
+ uint16_t repr_q_id;
+
+ uint16_t repr_id;
+ uint16_t repr_pf_id;
+ uint16_t repr_vf_id;
+ uint16_t repr_vf_vsi_id;
+ uint16_t repr_vf_k_vsi_id;
+ uint16_t repr_vf_u_vsi_id;
+};
+
+struct sxe2_sched_hw_cap {
+ uint32_t tm_layers;
+ uint8_t root_max_children;
+ uint8_t prio_max;
+ uint8_t adj_lvl;
+};
+
+struct sxe2_adapter {
+ struct sxe2_common_device *cdev;
+ struct sxe2_dev_info dev_info;
+ struct rte_pci_device *pci_dev;
+ struct sxe2_repr_private_data *repr_priv_data;
+ struct sxe2_pci_map_context map_ctxt;
+ struct sxe2_irq_context irq_ctxt;
+ struct sxe2_queue_context q_ctxt;
+ struct sxe2_vsi_context vsi_ctxt;
+ struct sxe2_devargs devargs;
+ uint16_t dev_port_id;
+ uint64_t cap_flags;
+ enum sxe2_dev_type dev_type;
+ uint32_t ptype_tbl[SXE2_MAX_PTYPE_NUM];
+ struct rte_ether_addr mac_addr;
+ uint8_t port_idx;
+ uint8_t pf_idx;
+ uint32_t tx_mode_flags;
+ uint32_t rx_mode_flags;
+ uint8_t started;
+};
+
+#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \
+ ((struct sxe2_adapter *)(dev)->data->dev_private)
+
+#endif /* __SXE2_ETHDEV_H__ */
diff --git a/drivers/net/sxe2/sxe2_irq.h b/drivers/net/sxe2/sxe2_irq.h
new file mode 100644
index 0000000000..bb96c6d842
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_irq.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_IRQ_H__
+#define __SXE2_IRQ_H__
+
+#include <ethdev_driver.h>
+
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_IRQ_MAX_CNT 2048
+
+#define SXE2_LAN_MSIX_MIN_CNT 1
+
+#define SXE2_EVENT_IRQ_IDX 0
+
+#define SXE2_MAX_INTR_QUEUE_NUM 256
+
+#define SXE2_IRQ_NAME_MAX_LEN (IFNAMSIZ + 16)
+
+#define SXE2_ITR_1000K 1
+#define SXE2_ITR_500K 2
+#define SXE2_ITR_50K 20
+
+#define SXE2_ITR_INTERVAL_NORMAL (SXE2_ITR_50K)
+#define SXE2_ITR_INTERVAL_LOW (SXE2_ITR_1000K)
+
+struct sxe2_fwc_msix_caps;
+struct sxe2_adapter;
+
+struct sxe2_irq_context {
+ struct rte_intr_handle *reset_handle;
+ int32_t reset_event_fd;
+ int32_t other_event_fd;
+
+ uint16_t max_cnt_hw;
+ uint16_t base_idx_in_func;
+
+ uint16_t rxq_avail_cnt;
+ uint16_t rxq_base_idx_in_pf;
+
+ uint16_t rxq_irq_cnt;
+ uint32_t *rxq_msix_idx;
+ int32_t *rxq_event_fd;
+};
+
+#endif /* __SXE2_IRQ_H__ */
diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c
new file mode 100644
index 0000000000..93f8236381
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_ethdev.h"
+#include "sxe2_queue.h"
+#include "sxe2_common_log.h"
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_queue_caps *q_caps)
+{
+ adapter->q_ctxt.qp_cnt_assign = q_caps->queues_cnt;
+ adapter->q_ctxt.base_idx_in_pf = q_caps->base_idx_in_pf;
+}
+
+int32_t sxe2_queues_init(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+ uint16_t buf_size;
+ uint16_t frame_size;
+ struct sxe2_rx_queue *rxq;
+ uint16_t nb_rxq;
+
+ frame_size = dev->data->mtu + SXE2_ETH_OVERHEAD;
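+ /* Size each Rx buffer from the mempool data room (minus headroom),
+ * rounded down to the hardware buffer granularity; a frame larger
+ * than one buffer forces scattered Rx.
+ */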
+ for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (!rxq)
+ continue;
+
+ buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = RTE_ALIGN_FLOOR(buf_size, (1 << SXE2_RXQ_CTX_DBUFF_SHIFT));
+ rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, SXE2_RX_MAX_DATA_BUF_SIZE);
+ if (frame_size > rxq->rx_buf_len)
+ dev->data->scattered_rx = 1;
+ }
+
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h
new file mode 100644
index 0000000000..e587e582fa
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_queue.h
@@ -0,0 +1,191 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_QUEUE_H__
+#define __SXE2_QUEUE_H__
+#include <rte_ethdev.h>
+#include <rte_io.h>
+#include <rte_stdatomic.h>
+#include <ethdev_driver.h>
+
+#include "sxe2_drv_cmd.h"
+#include "sxe2_txrx_common.h"
+
+#define SXE2_PCI_REG_READ(reg) \
+ rte_read32(reg)
+#define SXE2_PCI_REG_WRITE_WC(reg, value) \
+ rte_write32_wc((rte_cpu_to_le_32(value)), reg)
+#define SXE2_PCI_REG_WRITE_WC_RELAXED(reg, value) \
+ rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg)
+
+struct sxe2_queue_context {
+ uint16_t qp_cnt_assign;
+ uint16_t base_idx_in_pf;
+
+ uint32_t tx_mode_flags;
+ uint32_t rx_mode_flags;
+};
+
+struct sxe2_tx_buffer {
+ struct rte_mbuf *mbuf;
+
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+struct sxe2_tx_buffer_vec {
+ struct rte_mbuf *mbuf;
+};
+
+struct sxe2_txq_stats {
+ uint64_t tx_restart;
+ uint64_t tx_busy;
+
+ uint64_t tx_linearize;
+ uint64_t tx_tso_linearize_chk;
+ uint64_t tx_vlan_insert;
+ uint64_t tx_tso_packets;
+ uint64_t tx_tso_bytes;
+ uint64_t tx_csum_none;
+ uint64_t tx_csum_partial;
+ uint64_t tx_csum_partial_inner;
+ uint64_t tx_queue_dropped;
+ uint64_t tx_xmit_more;
+ uint64_t tx_pkts_num;
+ uint64_t tx_desc_not_done;
+};
+
+struct sxe2_tx_queue;
+struct sxe2_txq_ops {
+ void (*queue_reset)(struct sxe2_tx_queue *txq);
+ void (*mbufs_release)(struct sxe2_tx_queue *txq);
+ void (*buffer_ring_free)(struct sxe2_tx_queue *txq);
+};
+struct sxe2_tx_queue {
+ volatile union sxe2_tx_data_desc *desc_ring;
+ struct sxe2_tx_buffer *buffer_ring;
+ volatile uint32_t *tdt_reg_addr;
+
+ uint64_t offloads;
+ uint16_t ring_depth;
+ uint16_t desc_free_num;
+
+ uint16_t free_thresh;
+
+ uint16_t rs_thresh;
+ uint16_t next_use;
+ uint16_t next_clean;
+
+ uint16_t desc_used_num;
+ uint16_t next_dd;
+ uint16_t next_rs;
+ uint16_t ipsec_pkt_md_offset;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint16_t idx_in_func;
+ bool tx_deferred_start;
+ uint8_t pthresh;
+ uint8_t hthresh;
+ uint8_t wthresh;
+ uint16_t reg_idx;
+ uint64_t base_addr;
+ struct sxe2_vsi *vsi;
+ const struct rte_memzone *mz;
+ struct sxe2_txq_ops ops;
+ uint8_t vlan_flag;
+ uint8_t use_ctx:1,
+ res:7;
+};
+struct sxe2_rx_queue;
+struct sxe2_rxq_ops {
+ void (*queue_reset)(struct sxe2_rx_queue *rxq);
+ void (*mbufs_release)(struct sxe2_rx_queue *rxq);
+};
+struct sxe2_rxq_stats {
+ uint64_t rx_pkts_num;
+ uint64_t rx_rss_pkt_num;
+ uint64_t rx_fnav_pkt_num;
+ uint64_t rx_ptp_pkt_num;
+ uint32_t rx_vec_align_drop;
+
+ uint32_t rxdid_1588_err;
+ uint32_t ip_csum_err;
+ uint32_t l4_csum_err;
+ uint32_t outer_ip_csum_err;
+ uint32_t outer_l4_csum_err;
+ uint32_t macsec_err;
+ uint32_t ipsec_err;
+
+ uint64_t ptype_pkts[SXE2_MAX_PTYPE_NUM];
+};
+
+struct sxe2_rxq_sw_stats {
+ RTE_ATOMIC(uint64_t) pkts;
+ RTE_ATOMIC(uint64_t) bytes;
+ RTE_ATOMIC(uint64_t) drop_pkts;
+ RTE_ATOMIC(uint64_t) drop_bytes;
+ RTE_ATOMIC(uint64_t) unicast_pkts;
+ RTE_ATOMIC(uint64_t) multicast_pkts;
+ RTE_ATOMIC(uint64_t) broadcast_pkts;
+};
+
+struct sxe2_rx_queue {
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile uint32_t *rdt_reg_addr;
+ struct rte_mempool *mb_pool;
+ struct rte_mbuf **buffer_ring;
+ struct sxe2_vsi *vsi;
+
+ uint64_t offloads;
+ uint16_t ring_depth;
+ uint16_t rx_free_thresh;
+ uint16_t processing_idx;
+ uint16_t hold_num;
+ uint16_t next_ret_pkt;
+ uint16_t batch_alloc_trigger;
+ uint16_t completed_pkts_num;
+ uint64_t update_time;
+ uint32_t desc_ts;
+ uint64_t ts_high;
+ uint32_t ts_low;
+ uint32_t ts_need_update;
+ uint8_t crc_len;
+ bool fnav_enable;
+
+ struct rte_eth_rxseg_split rx_seg[SXE2_RX_SEG_NUM];
+
+ struct rte_mbuf *completed_buf[SXE2_RX_PKTS_BURST_BATCH_NUM * 2];
+ struct rte_mbuf *pkt_first_seg;
+ struct rte_mbuf *pkt_last_seg;
+ uint64_t mbuf_init_value;
+ uint16_t realloc_num;
+ uint16_t realloc_start;
+ struct rte_mbuf fake_mbuf;
+
+ const struct rte_memzone *mz;
+ struct sxe2_rxq_ops ops;
+ rte_iova_t base_addr;
+ uint16_t reg_idx;
+ uint32_t low_desc_waterline : 16;
+ uint32_t ldw_event_pending : 1;
+ struct sxe2_rxq_sw_stats sw_stats;
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint16_t idx_in_func;
+ uint16_t rx_buf_len;
+ uint16_t rx_hdr_len;
+ uint16_t max_pkt_len;
+ bool rx_deferred_start;
+ uint8_t drop_en;
+};
+
+struct sxe2_adapter;
+
+void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_queue_caps *q_caps);
+
+int32_t sxe2_queues_init(struct rte_eth_dev *dev);
+
+#endif /* __SXE2_QUEUE_H__ */
diff --git a/drivers/net/sxe2/sxe2_txrx_common.h b/drivers/net/sxe2/sxe2_txrx_common.h
new file mode 100644
index 0000000000..63f56e4964
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_common.h
@@ -0,0 +1,540 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef _SXE2_TXRX_COMMON_H_
+#define _SXE2_TXRX_COMMON_H_
+#include <stdbool.h>
+
+#define SXE2_ALIGN_RING_DESC 32
+#define SXE2_MIN_RING_DESC 64
+#define SXE2_MAX_RING_DESC 4096
+
+#define SXE2_VECTOR_PATH 0
+#define SXE2_VECTOR_OFFLOAD_PATH 1
+#define SXE2_VECTOR_CTX_OFFLOAD_PATH 2
+
+#define SXE2_MAX_PTYPE_NUM 1024
+#define SXE2_MIN_BUF_SIZE 1024
+
+#define SXE2_ALIGN 32
+#define SXE2_DESC_ADDR_ALIGN 128
+
+#define SXE2_MIN_TSO_MSS 88
+#define SXE2_MAX_TSO_MSS 9728
+
+#define SXE2_TX_MTU_SEG_MAX 15
+
+#define SXE2_TX_MIN_PKT_LEN 17
+#define SXE2_TX_MAX_BURST 32
+#define SXE2_TX_MAX_FREE_BUF 64
+#define SXE2_TX_TSO_PKTLEN_MAX (256ULL * 1024)
+
+#define DEFAULT_TX_RS_THRESH 32
+#define DEFAULT_TX_FREE_THRESH 32
+
+#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 RTE_BIT32(0)
+#define SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 RTE_BIT32(1)
+
+#define SXE2_TX_PKTS_BURST_BATCH_NUM 32
+
+union sxe2_tx_offload_info {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7;
+ uint64_t l3_len:9;
+ uint64_t l4_len:8;
+ uint64_t tso_segsz:16;
+ uint64_t outer_l2_len:8;
+ uint64_t outer_l3_len:16;
+ };
+};
+
+#define SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK (RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_MBUF_F_TX_UDP_SEG | \
+ RTE_MBUF_F_TX_QINQ | \
+ RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+ RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+ RTE_MBUF_F_TX_SEC_OFFLOAD | \
+ RTE_MBUF_F_TX_IEEE1588_TMST)
+
+#define SXE2_TX_OFFLOAD_CKSUM_MASK (RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_MBUF_F_TX_UDP_SEG | \
+ RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+ RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+struct sxe2_tx_context_desc {
+ uint32_t tunneling_params;
+ uint16_t l2tag2;
+ uint16_t ipsec_offset;
+ uint64_t type_cmd_tso_mss;
+};
+
+#define SXE2_TX_CTXT_DESC_EIPLEN_SHIFT 2
+#define SXE2_TX_CTXT_DESC_L4TUNT_SHIFT 9
+#define SXE2_TX_CTXT_DESC_NATLEN_SHIFT 12
+#define SXE2_TX_CTXT_DESC_L4T_CS_SHIFT 23
+
+#define SXE2_TX_CTXT_DESC_CMD_SHIFT 4
+#define SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT 11
+#define SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT 12
+#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT 13
+#define SXE2_TX_CTXT_DESC_IPSEC_SA_SHIFT 16
+#define SXE2_TX_CTXT_DESC_TSO_LEN_SHIFT 30
+#define SXE2_TX_CTXT_DESC_MSS_SHIFT 50
+#define SXE2_TX_CTXT_DESC_VSI_SHIFT 50
+
+#define SXE2_TX_CTXT_DESC_L4T_CS_MASK RTE_BIT64(SXE2_TX_CTXT_DESC_L4T_CS_SHIFT)
+
+#define SXE2_TX_CTXT_DESC_EIPLEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_CTXT_DESC_EIPLEN_SHIFT)
+#define SXE2_TX_CTXT_DESC_NATLEN_VAL(val) \
+ (((val) >> 1) << SXE2_TX_CTXT_DESC_NATLEN_SHIFT)
+
+enum sxe2_tx_ctxt_desc_eipt_bits {
+ SXE2_TX_CTXT_DESC_EIPT_NONE = 0x0,
+ SXE2_TX_CTXT_DESC_EIPT_IPV6 = 0x1,
+ SXE2_TX_CTXT_DESC_EIPT_IPV4_NO_CSUM = 0x2,
+ SXE2_TX_CTXT_DESC_EIPT_IPV4 = 0x3,
+};
+
+enum sxe2_tx_ctxt_desc_l4tunt_bits {
+ SXE2_TX_CTXT_DESC_UDP_TUNNE = 0x1 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT,
+ SXE2_TX_CTXT_DESC_GRE_TUNNE = 0x2 << SXE2_TX_CTXT_DESC_L4TUNT_SHIFT,
+};
+
+enum sxe2_tx_ctxt_desc_cmd_bits {
+ SXE2_TX_CTXT_DESC_CMD_TSO = 0x01,
+ SXE2_TX_CTXT_DESC_CMD_TSYN = 0x02,
+ SXE2_TX_CTXT_DESC_CMD_IL2TAG2 = 0x04,
+ SXE2_TX_CTXT_DESC_CMD_IL2TAG2_IL2H = 0x08,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_NOTAG = 0x00,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_UPLINK = 0x10,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_LOCAL = 0x20,
+ SXE2_TX_CTXT_DESC_CMD_SWTCH_VSI = 0x30,
+ SXE2_TX_CTXT_DESC_CMD_RESERVED = 0x40
+};
+#define SXE2_TX_CTXT_DESC_IPSEC_MODE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_MODE_SHIFT)
+#define SXE2_TX_CTXT_DESC_IPSEC_EN RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_EN_SHIFT)
+#define SXE2_TX_CTXT_DESC_IPSEC_ENGINE RTE_BIT64(SXE2_TX_CTXT_DESC_IPSEC_ENGINE_SHIFT)
+#define SXE2_TX_CTXT_DESC_CMD_TSYN_MASK \
+ (((uint64_t)SXE2_TX_CTXT_DESC_CMD_TSYN) << SXE2_TX_CTXT_DESC_CMD_SHIFT)
+#define SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK \
+ (((uint64_t)SXE2_TX_CTXT_DESC_CMD_IL2TAG2) << SXE2_TX_CTXT_DESC_CMD_SHIFT)
+
+union sxe2_tx_data_desc {
+ struct {
+ uint64_t buf_addr;
+ uint64_t type_cmd_off_bsz_l2t;
+ } read;
+ struct {
+ uint64_t rsvd;
+ uint64_t dd;
+ } wb;
+};
+
+#define SXE2_TX_DATA_DESC_CMD_SHIFT 4
+#define SXE2_TX_DATA_DESC_OFFSET_SHIFT 16
+#define SXE2_TX_DATA_DESC_BUF_SZ_SHIFT 34
+#define SXE2_TX_DATA_DESC_L2TAG1_SHIFT 48
+
+#define SXE2_TX_DATA_DESC_CMD_MASK \
+ (0xFFFULL << SXE2_TX_DATA_DESC_CMD_SHIFT)
+#define SXE2_TX_DATA_DESC_OFFSET_MASK \
+ (0x3FFFFULL << SXE2_TX_DATA_DESC_OFFSET_SHIFT)
+#define SXE2_TX_DATA_DESC_BUF_SZ_MASK \
+ (0x3FFFULL << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT)
+#define SXE2_TX_DATA_DESC_L2TAG1_MASK \
+ (0xFFFFULL << SXE2_TX_DATA_DESC_L2TAG1_SHIFT)
+
+#define SXE2_TX_DESC_LENGTH_MACLEN_SHIFT (0)
+#define SXE2_TX_DESC_LENGTH_IPLEN_SHIFT (7)
+#define SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT (14)
+
+#define SXE2_TX_DESC_DTYPE_MASK 0xF
+#define SXE2_TX_DATA_DESC_MACLEN_MASK \
+ (0x7FULL << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_IPLEN_MASK \
+ (0x7FULL << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_L4LEN_MASK \
+ (0xFULL << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+#define SXE2_TX_DATA_DESC_MACLEN_VAL(val) \
+ (((val) >> 1) << SXE2_TX_DESC_LENGTH_MACLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_IPLEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_DESC_LENGTH_IPLEN_SHIFT)
+#define SXE2_TX_DATA_DESC_L4LEN_VAL(val) \
+ (((val) >> 2) << SXE2_TX_DESC_LENGTH_L4_FC_LEN_SHIFT)
+
+enum sxe2_tx_desc_type {
+ SXE2_TX_DESC_DTYPE_DATA = 0x0,
+ SXE2_TX_DESC_DTYPE_CTXT = 0x1,
+ SXE2_TX_DESC_DTYPE_FLTR_PROG = 0x8,
+ SXE2_TX_DESC_DTYPE_DESC_DONE = 0xF,
+};
+
+enum sxe2_tx_data_desc_cmd_bits {
+ SXE2_TX_DATA_DESC_CMD_EOP = 0x0001,
+ SXE2_TX_DATA_DESC_CMD_RS = 0x0002,
+ SXE2_TX_DATA_DESC_CMD_MACSEC = 0x0004,
+ SXE2_TX_DATA_DESC_CMD_IL2TAG1 = 0x0008,
+ SXE2_TX_DATA_DESC_CMD_DUMMY = 0x0010,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV6 = 0x0020,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV4 = 0x0040,
+ SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM = 0x0060,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP = 0x0100,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP = 0x0200,
+ SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP = 0x0300,
+ SXE2_TX_DATA_DESC_CMD_RE = 0x0400
+};
+#define SXE2_TX_DATA_DESC_CMD_RS_MASK \
+ (((uint64_t)SXE2_TX_DATA_DESC_CMD_RS) << SXE2_TX_DATA_DESC_CMD_SHIFT)
+
+#define SXE2_TX_MAX_DATA_NUM_PER_DESC 0x3FFFUL
+
+#define SXE2_TX_DESC_RING_ALIGN \
+ (SXE2_ALIGN_RING_DESC / sizeof(union sxe2_tx_data_desc))
+
+#define SXE2_TX_DESC_DTYPE_DESC_MASK 0xF
+
+#define SXE2_TX_FILL_PER_LOOP 4
+#define SXE2_TX_FILL_PER_LOOP_MASK (SXE2_TX_FILL_PER_LOOP - 1)
+#define SXE2_TX_FREE_BUFFER_SIZE_MAX (64)
+
+#define SXE2_RX_MAX_BURST 32
+#define SXE2_RING_SIZE_MIN 1024
+#define SXE2_RX_MAX_NSEG 2
+
+#define SXE2_RX_PKTS_BURST_BATCH_NUM SXE2_RX_MAX_BURST
+#define SXE2_VPMD_RX_MAX_BURST SXE2_RX_MAX_BURST
+
+#define SXE2_RXQ_CTX_DBUFF_SHIFT 7
+
+#define SXE2_RX_NUM_PER_LOOP 8
+
+#define SXE2_RX_FLEX_DESC_PTYPE_S (16)
+#define SXE2_RX_FLEX_DESC_PTYPE_M (0x3FFULL)
+
+#define SXE2_RX_HBUF_LEN_UNIT 6
+#define SXE2_RX_LDW_LEN_UNIT 6
+#define SXE2_RX_DBUF_LEN_UNIT 7
+#define SXE2_RX_DBUF_LEN_MASK (~0x7F)
+
+#define SXE2_RX_PKTS_TS_TIMEOUT_VAL 200
+
+#define SXE2_RX_VECTOR_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH | \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+
+#define SXE2_DEFAULT_RX_FREE_THRESH 32
+#define SXE2_DEFAULT_RX_PTHRESH 8
+#define SXE2_DEFAULT_RX_HTHRESH 8
+#define SXE2_DEFAULT_RX_WTHRESH 0
+
+#define SXE2_DEFAULT_TX_FREE_THRESH 32
+#define SXE2_DEFAULT_TX_PTHRESH 32
+#define SXE2_DEFAULT_TX_HTHRESH 0
+#define SXE2_DEFAULT_TX_WTHRESH 0
+#define SXE2_DEFAULT_TX_RSBIT_THRESH 32
+
+#define SXE2_RX_SEG_NUM 2
+
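+/* Select the Rx descriptor layout at build time: the 16-byte form halves
+ * descriptor ring memory, while the 32-byte form carries the extended
+ * write-back fields defined below.
+ */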
+#ifdef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+#define sxe2_rx_desc sxe2_rx_16b_desc
+#else
+#define sxe2_rx_desc sxe2_rx_32b_desc
+#endif
+
+union sxe2_rx_16b_desc {
+ struct {
+ uint64_t pkt_addr;
+ uint64_t hdr_addr;
+ } read;
+ struct {
+ uint8_t rxdid_src;
+ uint8_t mirror;
+ uint16_t l2tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+ } wb;
+};
+
+union sxe2_rx_32b_desc {
+ struct {
+ uint64_t pkt_addr;
+ uint64_t hdr_addr;
+ uint64_t rsvd1;
+ uint64_t rsvd2;
+ } read;
+ struct {
+ uint8_t rxdid_src;
+ uint8_t mirror;
+ uint16_t l2tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+
+ uint32_t status_lrocnt_fdpf_id;
+ uint16_t l2tag2_1st;
+ uint16_t l2tag2_2nd;
+
+ uint8_t acl_pf_id;
+ uint8_t sw_pf_id;
+ uint16_t flow_id;
+
+ uint32_t fd_filter_id;
+
+ } wb;
+ struct {
+ uint8_t rxdid_src_fd_eudpe;
+ uint8_t mirror;
+ uint16_t l2_tag1;
+ uint32_t filter_status;
+
+ uint64_t status_err_ptype_len;
+
+ uint32_t ext_status_ts_low;
+ uint16_t l2tag2_1st;
+ uint16_t l2tag2_2nd;
+
+ uint32_t ts_h;
+ uint32_t fd_filter_id;
+
+ } wb_ts;
+};
+
+enum sxe2_rx_lro_desc_max_num {
+ SXE2_RX_LRO_DESC_MAX_1 = 1,
+ SXE2_RX_LRO_DESC_MAX_4 = 4,
+ SXE2_RX_LRO_DESC_MAX_8 = 8,
+ SXE2_RX_LRO_DESC_MAX_16 = 16,
+ SXE2_RX_LRO_DESC_MAX_32 = 32,
+ SXE2_RX_LRO_DESC_MAX_48 = 48,
+ SXE2_RX_LRO_DESC_MAX_64 = 64,
+ SXE2_RX_LRO_DESC_MAX_NUM = SXE2_RX_LRO_DESC_MAX_64,
+};
+
+enum sxe2_rx_desc_rxdid {
+ SXE2_RX_DESC_RXDID_16B = 0,
+ SXE2_RX_DESC_RXDID_32B,
+ SXE2_RX_DESC_RXDID_1588,
+ SXE2_RX_DESC_RXDID_FD,
+};
+
+#define SXE2_RX_DESC_RXDID_SHIFT (0)
+#define SXE2_RX_DESC_RXDID_MASK (0x7 << SXE2_RX_DESC_RXDID_SHIFT)
+#define SXE2_RX_DESC_RXDID_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_RXDID_MASK) >> SXE2_RX_DESC_RXDID_SHIFT)
+
+#define SXE2_RX_DESC_PKT_SRC_SHIFT (3)
+#define SXE2_RX_DESC_PKT_SRC_MASK (0x3 << SXE2_RX_DESC_PKT_SRC_SHIFT)
+#define SXE2_RX_DESC_PKT_SRC_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_PKT_SRC_MASK) >> SXE2_RX_DESC_PKT_SRC_SHIFT)
+
+#define SXE2_RX_DESC_FD_VLD_SHIFT (5)
+#define SXE2_RX_DESC_FD_VLD_MASK (0x1 << SXE2_RX_DESC_FD_VLD_SHIFT)
+#define SXE2_RX_DESC_FD_VLD_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_FD_VLD_MASK) >> SXE2_RX_DESC_FD_VLD_SHIFT)
+
+#define SXE2_RX_DESC_EUDPE_SHIFT (6)
+#define SXE2_RX_DESC_EUDPE_MASK (0x1 << SXE2_RX_DESC_EUDPE_SHIFT)
+#define SXE2_RX_DESC_EUDPE_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_EUDPE_MASK) >> SXE2_RX_DESC_EUDPE_SHIFT)
+
+#define SXE2_RX_DESC_UDP_NET_SHIFT (7)
+#define SXE2_RX_DESC_UDP_NET_MASK (0x1 << SXE2_RX_DESC_UDP_NET_SHIFT)
+#define SXE2_RX_DESC_UDP_NET_VAL_GET(rxdid_src) \
+ (((rxdid_src) & SXE2_RX_DESC_UDP_NET_MASK) >> SXE2_RX_DESC_UDP_NET_SHIFT)
+
+#define SXE2_RX_DESC_MIRR_ID_SHIFT (0)
+#define SXE2_RX_DESC_MIRR_ID_MASK (0x3F << SXE2_RX_DESC_MIRR_ID_SHIFT)
+#define SXE2_RX_DESC_MIRR_ID_VAL_GET(mirr) \
+ (((mirr) & SXE2_RX_DESC_MIRR_ID_MASK) >> SXE2_RX_DESC_MIRR_ID_SHIFT)
+
+#define SXE2_RX_DESC_MIRR_TYPE_SHIFT (6)
+#define SXE2_RX_DESC_MIRR_TYPE_MASK (0x3 << SXE2_RX_DESC_MIRR_TYPE_SHIFT)
+#define SXE2_RX_DESC_MIRR_TYPE_VAL_GET(mirr) \
+ (((mirr) & SXE2_RX_DESC_MIRR_TYPE_MASK) >> SXE2_RX_DESC_MIRR_TYPE_SHIFT)
+
+#define SXE2_RX_DESC_PKT_LEN_SHIFT (32)
+#define SXE2_RX_DESC_PKT_LEN_MASK (0x3FFFULL << SXE2_RX_DESC_PKT_LEN_SHIFT)
+#define SXE2_RX_DESC_PKT_LEN_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_PKT_LEN_MASK) >> SXE2_RX_DESC_PKT_LEN_SHIFT)
+
+#define SXE2_RX_DESC_HDR_LEN_SHIFT (46)
+#define SXE2_RX_DESC_HDR_LEN_MASK (0x7FFULL << SXE2_RX_DESC_HDR_LEN_SHIFT)
+#define SXE2_RX_DESC_HDR_LEN_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_HDR_LEN_MASK) >> SXE2_RX_DESC_HDR_LEN_SHIFT)
+
+#define SXE2_RX_DESC_SPH_SHIFT (57)
+#define SXE2_RX_DESC_SPH_MASK (0x1ULL << SXE2_RX_DESC_SPH_SHIFT)
+#define SXE2_RX_DESC_SPH_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_SPH_MASK) >> SXE2_RX_DESC_SPH_SHIFT)
+
+#define SXE2_RX_DESC_PTYPE_SHIFT (16)
+#define SXE2_RX_DESC_PTYPE_MASK (0x3FFULL << SXE2_RX_DESC_PTYPE_SHIFT)
+#define SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT (0x3FFULL)
+#define SXE2_RX_DESC_PTYPE_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_PTYPE_MASK) >> SXE2_RX_DESC_PTYPE_SHIFT)
+
+#define SXE2_RX_DESC_FILTER_STATUS_SHIFT (32)
+#define SXE2_RX_DESC_FILTER_STATUS_MASK (0xFFFFUL)
+
+#define SXE2_RX_DESC_LROCNT_SHIFT (0)
+#define SXE2_RX_DESC_LROCNT_MASK (0xF)
+
+enum sxe2_rx_desc_status_shift {
+ SXE2_RX_DESC_STATUS_DD_SHIFT = 0,
+ SXE2_RX_DESC_STATUS_EOP_SHIFT = 1,
+ SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT = 2,
+
+ SXE2_RX_DESC_STATUS_L3L4_P_SHIFT = 3,
+ SXE2_RX_DESC_STATUS_CRCP_SHIFT = 4,
+ SXE2_RX_DESC_STATUS_SECP_SHIFT = 5,
+ SXE2_RX_DESC_STATUS_SECTAG_SHIFT = 6,
+ SXE2_RX_DESC_STATUS_SECE_SHIFT = 26,
+ SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 27,
+ SXE2_RX_DESC_STATUS_UMBCAST_SHIFT = 28,
+ SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT = 30,
+ SXE2_RX_DESC_STATUS_LPBK_SHIFT = 59,
+ SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT = 60,
+ SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT = 61,
+ SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT = 62,
+ SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT = 63,
+};
+
+#define SXE2_RX_DESC_STATUS_DD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_DD_SHIFT)
+#define SXE2_RX_DESC_STATUS_EOP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_EOP_SHIFT)
+#define SXE2_RX_DESC_STATUS_L2TAG1_P_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_L2TAG1_P_SHIFT)
+#define SXE2_RX_DESC_STATUS_L3L4_P_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_L3L4_P_SHIFT)
+#define SXE2_RX_DESC_STATUS_CRCP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_CRCP_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECP_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECP_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECTAG_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECTAG_SHIFT)
+#define SXE2_RX_DESC_STATUS_SECE_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_SECE_SHIFT)
+#define SXE2_RX_DESC_STATUS_EXT_UDP_0_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_EXT_UDP_0_SHIFT)
+#define SXE2_RX_DESC_STATUS_UMBCAST_MASK \
+ (0x3ULL << SXE2_RX_DESC_STATUS_UMBCAST_SHIFT)
+#define SXE2_RX_DESC_STATUS_PHY_PORT_MASK \
+ (0x3ULL << SXE2_RX_DESC_STATUS_PHY_PORT_SHIFT)
+#define SXE2_RX_DESC_STATUS_LPBK_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_LPBK_SHIFT)
+#define SXE2_RX_DESC_STATUS_IPV6_EXADD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_IPV6_EXADD_SHIFT)
+#define SXE2_RX_DESC_STATUS_RSS_VLD_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_RSS_VLD_SHIFT)
+#define SXE2_RX_DESC_STATUS_ACL_HIT_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_ACL_HIT_SHIFT)
+#define SXE2_RX_DESC_STATUS_INT_UDP_0_MASK \
+ (0x1ULL << SXE2_RX_DESC_STATUS_INT_UDP_0_SHIFT)
+
+enum sxe2_rx_desc_umbcast_val {
+ SXE2_RX_DESC_STATUS_UNICAST = 0,
+ SXE2_RX_DESC_STATUS_MUTICAST = 1,
+ SXE2_RX_DESC_STATUS_BOARDCAST = 2,
+};
+
+#define SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qw1) \
+ (((qw1) & SXE2_RX_DESC_STATUS_UMBCAST_MASK) >> SXE2_RX_DESC_STATUS_UMBCAST_SHIFT)
+
+enum sxe2_rx_desc_error_shift {
+ SXE2_RX_DESC_ERROR_RXE_SHIFT = 7,
+ SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT = 8,
+ SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT = 9,
+
+ SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT = 10,
+
+ SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT = 11,
+
+ SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT = 12,
+ SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT = 13,
+ SXE2_RX_DESC_ERROR_SEC_ERR_SHIFT = 14,
+};
+
+#define SXE2_RX_DESC_ERROR_RXE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_RXE_SHIFT)
+#define SXE2_RX_DESC_ERROR_PKT_ECC_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_PKT_ECC_SHIFT)
+#define SXE2_RX_DESC_ERROR_PKT_HBO_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_PKT_HBO_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_IPE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_IPE_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_L4_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_L4_SHIFT)
+#define SXE2_RX_DESC_ERROR_CSUM_EIP_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_CSUM_EIP_SHIFT)
+#define SXE2_RX_DESC_ERROR_OVERSIZE_MASK \
+ (0x1ULL << SXE2_RX_DESC_ERROR_OVERSIZE_SHIFT)
+
+#define SXE2_RX_DESC_QW1_ERRORS_MASK \
+ (SXE2_RX_DESC_ERROR_CSUM_IPE_MASK | \
+ SXE2_RX_DESC_ERROR_CSUM_L4_MASK | \
+ SXE2_RX_DESC_ERROR_CSUM_EIP_MASK)
+
+enum sxe2_rx_desc_ext_status_shift {
+ SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 4,
+ SXE2_RX_DESC_EXT_STATUS_RSVD = 5,
+ SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT = 7,
+ SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT = 13,
+};
+#define SXE2_RX_DESC_EXT_STATUS_L2TAG2P_MASK \
+ (0x1ULL << SXE2_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)
+#define SXE2_RX_DESC_EXT_STATUS_PKT_REE_MASK \
+ (0x3FULL << SXE2_RX_DESC_EXT_STATUS_PKT_REE_SHIFT)
+#define SXE2_RX_DESC_EXT_STATUS_ROCE_MASK \
+ (0x1ULL << SXE2_RX_DESC_EXT_STATUS_ROCE_SHIFT)
+
+enum sxe2_rx_desc_ipsec_shift {
+ SXE2_RX_DESC_IPSEC_PKT_S = 21,
+ SXE2_RX_DESC_IPSEC_ENGINE_S = 22,
+ SXE2_RX_DESC_IPSEC_MODE_S = 23,
+ SXE2_RX_DESC_IPSEC_STATUS_S = 24,
+
+ SXE2_RX_DESC_IPSEC_LAST
+};
+
+enum sxe2_rx_desc_ipsec_status {
+ SXE2_RX_DESC_IPSEC_STATUS_SUCCESS = 0x0,
+ SXE2_RX_DESC_IPSEC_STATUS_PKG_OVER_2K = 0x1,
+ SXE2_RX_DESC_IPSEC_STATUS_SPI_IP_INVALID = 0x2,
+ SXE2_RX_DESC_IPSEC_STATUS_SA_INVALID = 0x3,
+ SXE2_RX_DESC_IPSEC_STATUS_NOT_ALIGN = 0x4,
+ SXE2_RX_DESC_IPSEC_STATUS_ICV_ERROR = 0x5,
+ SXE2_RX_DESC_IPSEC_STATUS_BY_PASSH = 0x6,
+ SXE2_RX_DESC_IPSEC_STATUS_MAC_BY_PASSH = 0x7,
+};
+
+#define SXE2_RX_DESC_IPSEC_PKT_MASK \
+ (0x1ULL << SXE2_RX_DESC_IPSEC_PKT_S)
+#define SXE2_RX_DESC_IPSEC_STATUS_MASK (0x7)
+#define SXE2_RX_DESC_IPSEC_STATUS_VAL_GET(qw2) \
+ (((qw2) >> SXE2_RX_DESC_IPSEC_STATUS_S) & \
+ SXE2_RX_DESC_IPSEC_STATUS_MASK)
+
+#define SXE2_RX_ERR_BITS 0x3f
+
+#define SXE2_RX_QUEUE_CHECK_INTERVAL_NUM 4
+
+#define SXE2_RX_DESC_RING_ALIGN \
+ (SXE2_ALIGN / sizeof(union sxe2_rx_desc))
+
+#define SXE2_RX_RING_SIZE \
+ ((SXE2_MAX_RING_DESC + SXE2_RX_PKTS_BURST_BATCH_NUM) * sizeof(union sxe2_rx_desc))
+
+#define SXE2_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128)
+
+#endif /* _SXE2_TXRX_COMMON_H_ */
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h
new file mode 100644
index 0000000000..f45e33f9b7
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_poll.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef SXE2_TXRX_POLL_H
+#define SXE2_TXRX_POLL_H
+
+#include "sxe2_queue.h"
+
+uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+#endif /* SXE2_TXRX_POLL_H */
diff --git a/drivers/net/sxe2/sxe2_vsi.c b/drivers/net/sxe2/sxe2_vsi.c
new file mode 100644
index 0000000000..baaa20c02e
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_vsi.c
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_os.h>
+#include <rte_tailq.h>
+#include <rte_malloc.h>
+#include "sxe2_ethdev.h"
+#include "sxe2_vsi.h"
+#include "sxe2_common_log.h"
+#include "sxe2_cmd_chnl.h"
+
+void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_vsi_caps *vsi_caps)
+{
+ adapter->vsi_ctxt.dpdk_vsi_id = vsi_caps->dpdk_vsi_id;
+ adapter->vsi_ctxt.kernel_vsi_id = vsi_caps->kernel_vsi_id;
+ adapter->vsi_ctxt.vsi_type = vsi_caps->vsi_type;
+}
+
+static struct sxe2_vsi *
+sxe2_vsi_node_alloc(struct sxe2_adapter *adapter, uint16_t vsi_id, uint16_t vsi_type)
+{
+ struct sxe2_vsi *vsi = NULL;
+ vsi = rte_zmalloc("sxe2_vsi", sizeof(*vsi), 0);
+ if (vsi == NULL) {
+ PMD_LOG_ERR(DRV, "Failed to malloc vf vsi struct.");
+ goto l_end;
+ }
+ vsi->adapter = adapter;
+
+ vsi->vsi_id = vsi_id;
+ vsi->vsi_type = vsi_type;
+
+l_end:
+ return vsi;
+}
+
+static void sxe2_vsi_queues_num_set(struct sxe2_vsi *vsi, uint16_t num_queues, uint16_t base_idx)
+{
+ vsi->txqs.q_cnt = num_queues;
+ vsi->rxqs.q_cnt = num_queues;
+ vsi->txqs.base_idx_in_func = base_idx;
+ vsi->rxqs.base_idx_in_func = base_idx;
+}
+
+static void sxe2_vsi_queues_cfg(struct sxe2_vsi *vsi)
+{
+ vsi->txqs.depth = vsi->txqs.depth ? vsi->txqs.depth : SXE2_DFLT_NUM_TX_DESC;
+ vsi->rxqs.depth = vsi->rxqs.depth ? vsi->rxqs.depth : SXE2_DFLT_NUM_RX_DESC;
+
+ PMD_LOG_INFO(DRV, "vsi:%u queue_cnt:%u txq_depth:%u rxq_depth:%u.",
+ vsi->vsi_id, vsi->txqs.q_cnt,
+ vsi->txqs.depth, vsi->rxqs.depth);
+}
+
+static void sxe2_vsi_irqs_cfg(struct sxe2_vsi *vsi, uint16_t num_irqs, uint16_t base_idx)
+{
+ vsi->irqs.avail_cnt = num_irqs;
+ vsi->irqs.base_idx_in_pf = base_idx;
+}
+
+static struct sxe2_vsi *sxe2_vsi_node_create(struct sxe2_adapter *adapter,
+ uint16_t vsi_id,
+ uint16_t vsi_type)
+{
+ struct sxe2_vsi *vsi = NULL;
+ uint16_t num_queues = 0;
+ uint16_t queue_base_idx = 0;
+ uint16_t num_irqs = 0;
+ uint16_t irq_base_idx = 0;
+
+ vsi = sxe2_vsi_node_alloc(adapter, vsi_id, vsi_type);
+ if (vsi == NULL)
+ goto l_end;
+
+ if (vsi_type == SXE2_VSI_T_DPDK_PF ||
+ vsi_type == SXE2_VSI_T_DPDK_VF) {
+ num_queues = adapter->q_ctxt.qp_cnt_assign;
+ queue_base_idx = adapter->q_ctxt.base_idx_in_pf;
+
+ num_irqs = adapter->irq_ctxt.max_cnt_hw;
+ irq_base_idx = adapter->irq_ctxt.base_idx_in_func;
+ } else if (vsi_type == SXE2_VSI_T_DPDK_ESW) {
+ num_queues = 1;
+ num_irqs = 1;
+ }
+
+ sxe2_vsi_queues_num_set(vsi, num_queues, queue_base_idx);
+
+ sxe2_vsi_queues_cfg(vsi);
+
+ sxe2_vsi_irqs_cfg(vsi, num_irqs, irq_base_idx);
+
+l_end:
+ return vsi;
+}
+
+static void sxe2_vsi_node_free(struct sxe2_vsi *vsi)
+{
+ if (!vsi)
+ return;
+
+ rte_free(vsi);
+}
+
+static int32_t sxe2_vsi_destroy(struct sxe2_adapter *adapter, struct sxe2_vsi *vsi)
+{
+ int32_t ret = 0;
+
+ if (vsi == NULL) {
+ PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy.");
+ goto l_end;
+ }
+
+ if (vsi->vsi_type != SXE2_VSI_T_DPDK_ESW) {
+ ret = sxe2_drv_vsi_del(adapter, vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret);
+ if (ret == -EPERM)
+ goto l_free;
+ goto l_end;
+ }
+ }
+
+l_free:
+ rte_free(vsi);
+
+ PMD_LOG_DEBUG(DRV, "vsi destroyed.");
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_main_vsi_create(struct sxe2_adapter *adapter)
+{
+ int32_t ret = 0;
+ uint16_t vsi_id = adapter->vsi_ctxt.dpdk_vsi_id;
+ uint16_t vsi_type = adapter->vsi_ctxt.vsi_type;
+ bool is_reused = (vsi_id != SXE2_INVALID_VSI_ID);
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (!is_reused)
+ vsi_type = SXE2_VSI_T_DPDK_PF;
+ else
+ PMD_LOG_INFO(DRV, "Reusing existing HW vsi_id:%u", vsi_id);
+
+ adapter->vsi_ctxt.main_vsi = sxe2_vsi_node_create(adapter, vsi_id, vsi_type);
+ if (adapter->vsi_ctxt.main_vsi == NULL) {
+ PMD_LOG_ERR(DRV, "Failed to create vsi struct, ret=%d", ret);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ if (!is_reused) {
+ ret = sxe2_drv_vsi_add(adapter, adapter->vsi_ctxt.main_vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to config vsi to fw, ret=%d", ret);
+ goto l_free_vsi;
+ }
+
+ adapter->vsi_ctxt.dpdk_vsi_id = adapter->vsi_ctxt.main_vsi->vsi_id;
+ PMD_LOG_DEBUG(DRV, "Successfully created and synced new VSI");
+ }
+
+ goto l_end;
+
+l_free_vsi:
+ sxe2_vsi_node_free(adapter->vsi_ctxt.main_vsi);
+ adapter->vsi_ctxt.main_vsi = NULL;
+l_end:
+ return ret;
+}
+
+int32_t sxe2_vsi_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_main_vsi_create(adapter);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to create main VSI, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+void sxe2_vsi_uninit(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret;
+
+ if (adapter->vsi_ctxt.main_vsi == NULL) {
+ PMD_LOG_INFO(DRV, "vsi is not created, no need to destroy.");
+ goto l_end;
+ }
+
+ ret = sxe2_vsi_destroy(adapter, adapter->vsi_ctxt.main_vsi);
+ if (ret) {
+ PMD_LOG_ERR(DRV, "Failed to del vsi from fw, ret=%d", ret);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(DRV, "vsi destroyed.");
+
+l_end:
+ return;
+}
diff --git a/drivers/net/sxe2/sxe2_vsi.h b/drivers/net/sxe2/sxe2_vsi.h
new file mode 100644
index 0000000000..e712f738f1
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_vsi.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_VSI_H__
+#define __SXE2_VSI_H__
+#include <rte_os.h>
+#include "sxe2_drv_cmd.h"
+
+#define SXE2_MAX_BOND_MEMBER_CNT 4
+
+enum sxe2_drv_type {
+ SXE2_MAX_DRV_TYPE_DPDK = 0,
+ SXE2_MAX_DRV_TYPE_KERNEL,
+ SXE2_MAX_DRV_TYPE_CNT,
+};
+
+#define SXE2_MAX_USER_PRIORITY (8)
+
+#define SXE2_DFLT_NUM_RX_DESC 512
+#define SXE2_DFLT_NUM_TX_DESC 512
+
+#define SXE2_DFLT_Q_NUM_OTHER_VSI 1
+#define SXE2_INVALID_VSI_ID 0xFFFF
+
+struct sxe2_adapter;
+struct sxe2_drv_vsi_caps;
+struct rte_eth_dev;
+
+enum sxe2_vsi_type {
+ SXE2_VSI_T_PF = 0,
+ SXE2_VSI_T_VF,
+ SXE2_VSI_T_CTRL,
+ SXE2_VSI_T_LB,
+ SXE2_VSI_T_MACVLAN,
+ SXE2_VSI_T_ESW,
+ SXE2_VSI_T_RDMA,
+ SXE2_VSI_T_DPDK_PF,
+ SXE2_VSI_T_DPDK_VF,
+ SXE2_VSI_T_DPDK_ESW,
+ SXE2_VSI_T_NR,
+};
+
+struct sxe2_queue_info {
+ uint16_t base_idx_in_nic;
+ uint16_t base_idx_in_func;
+ uint16_t q_cnt;
+ uint16_t depth;
+ uint16_t rx_buf_len;
+ uint16_t max_frame_len;
+ struct sxe2_queue **queues;
+};
+
+struct sxe2_vsi_irqs {
+ uint16_t avail_cnt;
+ uint16_t used_cnt;
+ uint16_t base_idx_in_pf;
+};
+
+enum {
+ SXE2_VSI_DOWN = 0,
+ SXE2_VSI_CLOSE,
+ SXE2_VSI_DISABLE,
+ SXE2_VSI_MAX,
+};
+
+struct sxe2_stats {
+ uint64_t ipackets;
+
+ uint64_t opackets;
+
+ uint64_t ibytes;
+
+ uint64_t obytes;
+
+ uint64_t ierrors;
+
+ uint64_t imissed;
+
+ uint64_t rx_out_of_buffer;
+ uint64_t rx_qblock_drop;
+
+ uint64_t tx_frame_good;
+ uint64_t rx_frame_good;
+ uint64_t rx_crc_errors;
+ uint64_t tx_bytes_good;
+ uint64_t rx_bytes_good;
+ uint64_t tx_multicast_good;
+ uint64_t tx_broadcast_good;
+ uint64_t rx_multicast_good;
+ uint64_t rx_broadcast_good;
+ uint64_t rx_len_errors;
+ uint64_t rx_out_of_range_errors;
+ uint64_t rx_oversize_pkts_phy;
+ uint64_t rx_symbol_err;
+ uint64_t rx_pause_frame;
+ uint64_t tx_pause_frame;
+
+ uint64_t rx_discards_phy;
+ uint64_t rx_discards_ips_phy;
+
+ uint64_t tx_dropped_link_down;
+ uint64_t rx_undersize_good;
+ uint64_t rx_runt_error;
+ uint64_t tx_bytes_good_bad;
+ uint64_t tx_frame_good_bad;
+ uint64_t rx_jabbers;
+ uint64_t rx_size_64;
+ uint64_t rx_size_65_127;
+ uint64_t rx_size_128_255;
+ uint64_t rx_size_256_511;
+ uint64_t rx_size_512_1023;
+ uint64_t rx_size_1024_1522;
+ uint64_t rx_size_1523_max;
+ uint64_t rx_pcs_symbol_err_phy;
+ uint64_t rx_corrected_bits_phy;
+ uint64_t rx_err_lane_0_phy;
+ uint64_t rx_err_lane_1_phy;
+ uint64_t rx_err_lane_2_phy;
+ uint64_t rx_err_lane_3_phy;
+
+ uint64_t rx_prio_buf_discard[SXE2_MAX_USER_PRIORITY];
+ uint64_t rx_illegal_bytes;
+ uint64_t rx_oversize_good;
+ uint64_t tx_unicast;
+ uint64_t tx_broadcast;
+ uint64_t tx_multicast;
+ uint64_t tx_vlan_packet_good;
+ uint64_t tx_size_64;
+ uint64_t tx_size_65_127;
+ uint64_t tx_size_128_255;
+ uint64_t tx_size_256_511;
+ uint64_t tx_size_512_1023;
+ uint64_t tx_size_1024_1522;
+ uint64_t tx_size_1523_max;
+ uint64_t tx_underflow_error;
+ uint64_t rx_byte_good_bad;
+ uint64_t rx_frame_good_bad;
+ uint64_t rx_unicast_good;
+ uint64_t rx_vlan_packets;
+
+ uint64_t prio_xoff_rx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_rx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_tx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xoff_tx[SXE2_MAX_USER_PRIORITY];
+ uint64_t prio_xon_2_xoff[SXE2_MAX_USER_PRIORITY];
+
+ uint64_t rx_vsi_unicast_packets;
+ uint64_t rx_vsi_bytes;
+ uint64_t tx_vsi_unicast_packets;
+ uint64_t tx_vsi_bytes;
+ uint64_t rx_vsi_multicast_packets;
+ uint64_t tx_vsi_multicast_packets;
+ uint64_t rx_vsi_broadcast_packets;
+ uint64_t tx_vsi_broadcast_packets;
+
+ uint64_t rx_sw_unicast_packets;
+ uint64_t rx_sw_broadcast_packets;
+ uint64_t rx_sw_multicast_packets;
+ uint64_t rx_sw_drop_packets;
+ uint64_t rx_sw_drop_bytes;
+};
+
+struct sxe2_vsi_stats {
+ struct sxe2_stats vsi_sw_stats;
+ struct sxe2_stats vsi_sw_stats_prev;
+ struct sxe2_stats vsi_hw_stats;
+ struct sxe2_stats stats;
+};
+
+struct sxe2_vsi {
+ TAILQ_ENTRY(sxe2_vsi) next;
+ struct sxe2_adapter *adapter;
+ uint16_t vsi_id;
+ uint16_t vsi_type;
+ struct sxe2_vsi_irqs irqs;
+ struct sxe2_queue_info txqs;
+ struct sxe2_queue_info rxqs;
+ uint16_t budget;
+ struct sxe2_vsi_stats vsi_stats;
+};
+
+TAILQ_HEAD(sxe2_vsi_list_head, sxe2_vsi);
+
+struct sxe2_vsi_context {
+ uint16_t func_id;
+ uint16_t dpdk_vsi_id;
+ uint16_t kernel_vsi_id;
+ uint16_t vsi_type;
+
+ uint16_t bond_member_kernel_vsi_id[SXE2_MAX_BOND_MEMBER_CNT];
+ uint16_t bond_member_dpdk_vsi_id[SXE2_MAX_BOND_MEMBER_CNT];
+
+ struct sxe2_vsi *main_vsi;
+};
+
+void sxe2_sw_vsi_ctx_hw_cap_set(struct sxe2_adapter *adapter,
+ struct sxe2_drv_vsi_caps *vsi_caps);
+
+int32_t sxe2_vsi_init(struct rte_eth_dev *dev);
+
+void sxe2_vsi_uninit(struct rte_eth_dev *dev);
+
+#endif /* __SXE2_VSI_H__ */
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v15 06/11] drivers: support PCI BAR mapping
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (4 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 05/11] drivers: add base driver probe skeleton liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 07/11] common/sxe2: add ioctl interface for DMA map and unmap liujie5
` (4 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement PCI BAR (Base Address Register) mapping and unmapping
logic to enable MMIO (Memory Mapped I/O) access to hardware
registers.
The driver retrieves the BAR0 virtual address from the PCI resource
during the probing phase. This mapping is used for subsequent
register-level operations. Proper cleanup is implemented in the
device close path.
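A minimal sketch of the alignment step performed during mapping, assuming
only POSIX mmap() (the helper bar_reg_map() and its parameter names are
illustrative, not part of this patch): mmap() requires a page-aligned file
offset, so the requested register offset is aligned down, the length is
grown to whole pages, and the intra-page remainder is returned so callers
still land on the exact register.

    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *bar_reg_map(int fd, uint64_t reg_off, uint64_t reg_len,
                             uint64_t *inner_off)
    {
        uint64_t page = (uint64_t)sysconf(_SC_PAGESIZE);
        uint64_t base = reg_off & ~(page - 1);  /* align offset down */
        uint64_t len = ((reg_off - base) + reg_len + page - 1) & ~(page - 1);
        void *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, (off_t)base);

        if (va == MAP_FAILED)
            return NULL;
        *inner_off = reg_off - base;  /* register offset inside mapping */
        return va;
    }

The driver performs the same computation with rte_mem_page_size(),
RTE_ALIGN_FLOOR() and RTE_ALIGN() in sxe2_dev_pci_seg_map() below, and
stores the intra-page offset in the segment table for later register
access.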
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 8 +-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 38 ++-
drivers/net/sxe2/sxe2_ethdev.c | 326 +++++++++++++++++++++++++-
drivers/net/sxe2/sxe2_ethdev.h | 18 ++
4 files changed, 381 insertions(+), 9 deletions(-)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index 5cf43dd3b7..c309a890ad 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -185,7 +185,7 @@ static int32_t sxe2_common_device_setup(struct sxe2_common_device *cdev)
ret = sxe2_drv_dev_handshke(cdev);
if (ret != 0) {
- PMD_LOG_ERR(COM, "Handshark failed, ret=%d", ret);
+ PMD_LOG_ERR(COM, "handshake failed, ret=%d", ret);
goto l_close_dev;
}
@@ -605,18 +605,18 @@ static void sxe2_common_pci_init(void)
return;
}
-static bool sxe2_commoin_inited;
+static bool sxe2_common_inited;
RTE_EXPORT_INTERNAL_SYMBOL(sxe2_common_init)
void
sxe2_common_init(void)
{
- if (sxe2_commoin_inited)
+ if (sxe2_common_inited)
goto l_end;
pthread_mutex_init(&sxe2_common_devices_list_lock, NULL);
sxe2_common_pci_init();
- sxe2_commoin_inited = true;
+ sxe2_common_inited = true;
l_end:
return;
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 11e24d04d9..48acc41509 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -133,7 +133,7 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
goto l_end;
}
- PMD_LOG_DEBUG(COM, "Open fd=%d to handshark with kernel", cmd_fd);
+ PMD_LOG_DEBUG(COM, "Open fd=%d to handshake with kernel", cmd_fd);
memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_cmd_common_hdr));
cmd_params.dpdk_ver = SXE2_COM_VER;
@@ -142,7 +142,7 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
(void)pthread_mutex_lock(&cdev->config.lock);
ret = ioctl(cmd_fd, SXE2_COM_CMD_HANDSHAKE, &cmd_params);
if (ret < 0) {
- PMD_LOG_ERR(COM, "Failed to handshark, fd=%d, ret=%d, err:%s",
+ PMD_LOG_ERR(COM, "Failed to handshake, fd=%d, ret=%d, err:%s",
cmd_fd, ret, strerror(errno));
ret = -errno;
(void)pthread_mutex_unlock(&cdev->config.lock);
@@ -159,6 +159,40 @@ sxe2_drv_dev_handshke(struct sxe2_common_device *cdev)
return ret;
}
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_mmap)
+void *
+sxe2_drv_dev_mmap(struct sxe2_common_device *cdev, uint8_t bar_idx, uint64_t len, uint64_t offset)
+{
+ int32_t cmd_fd = 0;
+ void *virt = NULL;
+
+ if (cdev->config.kernel_reset) {
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_err;
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_err;
+ }
+
+ PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"",
+ bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+
+ virt = mmap(NULL, len, PROT_READ | PROT_WRITE,
+ MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
+ if (virt == MAP_FAILED) {
+ PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s",
+ cmd_fd, len, offset, strerror(errno));
+ goto l_err;
+ }
+
+ return virt;
+l_err:
+ return NULL;
+}
+
RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_munmap)
int32_t
sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index f0bdda38a7..204add9c98 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -54,6 +54,27 @@ static const struct rte_pci_id pci_id_sxe2_tbl[] = {
{ .vendor_id = 0, },
};
+static struct sxe2_pci_map_addr_info sxe2_net_map_addr_info_pf[SXE2_PCI_MAP_RES_MAX_COUNT] = {
+ [SXE2_PCI_MAP_RES_INVALID] = {.addr_base = 0,
+ .bar_idx = 0,
+ .reg_width = 0},
+ [SXE2_PCI_MAP_RES_DOORBELL_TX] = {.addr_base = SXE2_TXQ_LEGACY_DBLL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL] = {.addr_base = SXE2_RXQ_TAIL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_DYN] = {.addr_base = SXE2_VF_DYN_CTL(0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_ITR] = {.addr_base = SXE2_VF_INT_ITR(0, 0),
+ .bar_idx = 0,
+ .reg_width = 4},
+ [SXE2_PCI_MAP_RES_IRQ_MSIX] = {.addr_base = SXE2_BAR4_MSIX_CTL(0),
+ .bar_idx = 4,
+ .reg_width = 10},
+};
+
static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -151,6 +172,7 @@ static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
(void)sxe2_dev_stop(dev);
sxe2_vsi_uninit(dev);
+ sxe2_dev_pci_map_uinit(dev);
return 0;
}
@@ -287,6 +309,31 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_infos_get = sxe2_dev_infos_get,
};
+struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type)
+{
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ uint8_t bar_idx = SXE2_PCI_MAP_BAR_INVALID;
+ uint8_t i;
+
+ bar_idx = map_ctxt->addr_info[res_type].bar_idx;
+ if (bar_idx == SXE2_PCI_MAP_BAR_INVALID) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type);
+ goto l_end;
+ }
+
+ for (i = 0; i < map_ctxt->bar_cnt; i++) {
+ if (bar_idx == map_ctxt->bar_info[i].bar_idx) {
+ bar_info = &map_ctxt->bar_info[i];
+ break;
+ }
+ }
+
+l_end:
+ return bar_info;
+}
+
static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
struct sxe2_drv_dev_caps_resp *dev_caps)
{
@@ -354,6 +401,69 @@ static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
return ret;
}
+int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset)
+{
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ void *map_addr = NULL;
+ int32_t ret = 0;
+ size_t page_size = 0;
+ size_t aligned_len = 0;
+ size_t page_inner_offset = 0;
+ off_t aligned_offset = 0;
+ uint8_t i = 0;
+
+ if (org_len == 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid length, ori_len = 0");
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (!bar_info) {
+ PMD_LOG_ERR(INIT, "Failed to get bar info, res_type=[%d]", res_type);
+ ret = -EFAULT;
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ page_size = rte_mem_page_size();
+
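+ /* mmap() requires a page-aligned file offset: align the register
+ * offset down, round the length up to whole pages, and record where
+ * the target registers sit inside the enlarged mapping.
+ */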
+ aligned_offset = RTE_ALIGN_FLOOR(org_offset, page_size);
+ page_inner_offset = org_offset - aligned_offset;
+ aligned_len = RTE_ALIGN(page_inner_offset + org_len, page_size);
+
+ map_addr = sxe2_drv_dev_mmap(adapter->cdev, bar_info->bar_idx,
+ aligned_len, aligned_offset);
+ if (!map_addr) {
+ PMD_LOG_ERR(INIT, "Failed to mmap BAR space, type=%d, len=%" PRIu64
+ ", offset=%" PRIu64 ", page_size=%zu",
+ res_type, org_len, org_offset, page_size);
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ if (seg_info[i].type != SXE2_PCI_MAP_RES_INVALID)
+ continue;
+ seg_info[i].type = res_type;
+ seg_info[i].addr = map_addr;
+ seg_info[i].page_inner_offset = page_inner_offset;
+ seg_info[i].len = aligned_len;
+ break;
+ }
+ if (i == bar_info->map_cnt) {
+ PMD_LOG_ERR(INIT, "No memory to save resource, res_type=%d", res_type);
+ ret = -ENOMEM;
+ sxe2_drv_dev_munmap(adapter->cdev, map_addr, aligned_len);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
@@ -368,6 +478,55 @@ static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
return ret;
}
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
+ uint32_t item_cnt, uint32_t item_base)
+{
+ struct sxe2_pci_map_addr_info *addr_info = NULL;
+ int32_t ret = 0;
+
+ addr_info = &adapter->map_ctxt.addr_info[res_type];
+ if (!addr_info || addr_info->bar_idx == SXE2_PCI_MAP_BAR_INVALID) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Invalid bar index with resource type %d", res_type);
+ ret = -EFAULT;
+ goto l_end;
+ }
+
+ ret = sxe2_dev_pci_seg_map(adapter, res_type, item_cnt * addr_info->reg_width,
+ addr_info->addr_base + item_base * addr_info->reg_width);
+ if (ret != 0) {
+ PMD_DEV_LOG_ERR(adapter, INIT, "Failed to map resource, res_type=%d", res_type);
+ goto l_end;
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type)
+{
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ uint32_t i = 0;
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (bar_info == NULL) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]", res_type);
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ if (res_type == seg_info[i].type) {
+ (void)sxe2_drv_dev_munmap(adapter->cdev, seg_info[i].addr,
+ seg_info[i].len);
+ memset(&seg_info[i], 0, sizeof(struct sxe2_pci_map_segment_info));
+ break;
+ }
+ }
+
+l_end:
+ return;
+}
+
static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter =
@@ -408,6 +567,157 @@ static int32_t sxe2_dev_info_init(struct rte_eth_dev *dev)
return ret;
}
+int32_t sxe2_dev_pci_map_init(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ uint16_t txq_cnt = adapter->q_ctxt.qp_cnt_assign;
+ uint16_t txq_base = adapter->q_ctxt.base_idx_in_pf;
+ uint16_t rxq_cnt = adapter->q_ctxt.qp_cnt_assign;
+ uint16_t irq_cnt = adapter->irq_ctxt.max_cnt_hw;
+ uint16_t irq_base = adapter->irq_ctxt.base_idx_in_func;
+ uint16_t rxq_base = adapter->q_ctxt.base_idx_in_pf;
+ int32_t ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ adapter->dev_info.dev_data = dev->data;
+
+ if (!pci_dev->mem_resource[0].phys_addr) {
+ PMD_LOG_ERR(INIT, "Physical address not scanned");
+ ret = -ENXIO;
+ goto l_end;
+ }
+
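+ /* Two BARs are mapped: BAR 0 holds the doorbell and interrupt
+ * registers, BAR 4 the MSI-X control region (see
+ * sxe2_net_map_addr_info_pf above).
+ */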
+ map_ctxt->bar_cnt = 2;
+
+ bar_info = rte_zmalloc(NULL, sizeof(*bar_info) * map_ctxt->bar_cnt, 0);
+ if (!bar_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc bar_info");
+ ret = -ENOMEM;
+ goto l_end;
+ }
+ bar_info[0].bar_idx = 0;
+ bar_info[0].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT;
+ seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[0].map_cnt, 0);
+ if (!seg_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc seg_info");
+ ret = -ENOMEM;
+ goto l_free_bar;
+ }
+
+ bar_info[0].seg_info = seg_info;
+
+ bar_info[1].bar_idx = 4;
+ bar_info[1].map_cnt = SXE2_PCI_MAP_RES_MAX_COUNT;
+ seg_info = rte_zmalloc(NULL, sizeof(*seg_info) * bar_info[1].map_cnt, 0);
+ if (!seg_info) {
+ PMD_LOG_ERR(INIT, "Failed to alloc seg_info");
+ ret = -ENOMEM;
+ goto l_free_seg0;
+ }
+
+ bar_info[1].seg_info = seg_info;
+ map_ctxt->bar_info = bar_info;
+
+ map_ctxt->addr_info = sxe2_net_map_addr_info_pf;
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX,
+ txq_cnt, txq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map txq doorbell addr, ret=%d", ret);
+ goto l_free_seg1;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ rxq_cnt, rxq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map rxq tail doorbell addr, ret=%d", ret);
+ goto l_free_txq;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_DYN,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq dyn addr, ret=%d", ret);
+ goto l_free_rxq_tail;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_ITR,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq itr addr, ret=%d", ret);
+ goto l_free_irq_dyn;
+ }
+
+ ret = sxe2_dev_pci_res_seg_map(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX,
+ irq_cnt, irq_base);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to map irq msix addr, ret=%d", ret);
+ goto l_free_irq_itr;
+ }
+ goto l_end;
+
+l_free_irq_itr:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR);
+l_free_irq_dyn:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN);
+l_free_rxq_tail:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL);
+l_free_txq:
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX);
+l_free_seg1:
+ if (bar_info[1].seg_info) {
+ rte_free(bar_info[1].seg_info);
+ bar_info[1].seg_info = NULL;
+ }
+l_free_seg0:
+ if (bar_info[0].seg_info) {
+ rte_free(bar_info[0].seg_info);
+ bar_info[0].seg_info = NULL;
+ }
+l_free_bar:
+ if (bar_info) {
+ rte_free(bar_info);
+ bar_info = NULL;
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ uint8_t i = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_DYN);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_ITR);
+ (void)sxe2_dev_pci_seg_unmap(adapter, SXE2_PCI_MAP_RES_IRQ_MSIX);
+
+ if (map_ctxt != NULL && map_ctxt->bar_info != NULL) {
+ for (i = 0; i < map_ctxt->bar_cnt; i++) {
+ bar_info = &map_ctxt->bar_info[i];
+ if (bar_info->seg_info != NULL) {
+ rte_free(bar_info->seg_info);
+ bar_info->seg_info = NULL;
+ }
+ }
+ rte_free(map_ctxt->bar_info);
+ map_ctxt->bar_info = NULL;
+ }
+
+ adapter->dev_info.dev_data = NULL;
+}
+
static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
struct sxe2_dev_kvargs_info *kvargs __rte_unused)
{
@@ -426,6 +736,12 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
goto l_end;
}
+ ret = sxe2_dev_pci_map_init(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to pci addr map, ret=[%d]", ret);
+ goto l_end;
+ }
+
ret = sxe2_vsi_init(dev);
if (ret) {
PMD_LOG_ERR(INIT, "create main vsi failed, ret=%d", ret);
@@ -548,8 +864,10 @@ static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
memset(eth_da, 0, sizeof(*eth_da));
if (dev->devargs->cls_str) {
- ret = rte_eth_devargs_parse(dev->devargs->cls_str, eth_da, 1);
- if (ret != 0) {
+ ret = rte_eth_devargs_parse(dev->devargs->cls_str,
+ eth_da,
+ 1);
+ if (ret) {
PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
dev->devargs->cls_str);
return -rte_errno;
@@ -557,7 +875,9 @@ static int32_t sxe2_parse_eth_devargs(struct rte_device *dev,
}
if (eth_da->type == RTE_ETH_REPRESENTOR_NONE && dev->devargs->args) {
- ret = rte_eth_devargs_parse(dev->devargs->args, eth_da, 1);
+ ret = rte_eth_devargs_parse(dev->devargs->args,
+ eth_da,
+ 1);
if (ret) {
PMD_LOG_ERR(INIT, "Failed to parse device arguments: %s",
dev->devargs->args);
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
index c4634685e6..843e652616 100644
--- a/drivers/net/sxe2/sxe2_ethdev.h
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -290,4 +290,22 @@ struct sxe2_adapter {
#define SXE2_DEV_PRIVATE_TO_ADAPTER(dev) \
((struct sxe2_adapter *)(dev)->data->dev_private)
+#define SXE2_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type);
+
+int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset);
+
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
+ uint32_t item_cnt, uint32_t item_base);
+
+void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type);
+
+int32_t sxe2_dev_pci_map_init(struct rte_eth_dev *dev);
+
+void sxe2_dev_pci_map_uinit(struct rte_eth_dev *dev);
+
#endif /* __SXE2_ETHDEV_H__ */
--
2.47.3
* [PATCH v15 07/11] common/sxe2: add ioctl interface for DMA map and unmap
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (5 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 06/11] drivers: support PCI BAR mapping liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 08/11] net/sxe2: support queue setup and control liujie5
` (3 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement DMA mapping and unmapping functionality using ioctl
calls. This allows the driver to configure the hardware's IOMMU/DMA
tables, ensuring the device can safely access memory buffers
allocated in user space.
The mapping is established during device initialization or queue
setup and is revoked during device closure to prevent memory
leaks and ensure hardware security.
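As a usage sketch for reviewers (illustrative only, not part of the
patch): a caller holding a struct sxe2_common_device, e.g. obtained
via sxe2_rtedev_to_cdev(), would drive the new internal API roughly
as below; the wrapper name is hypothetical.

/* Hypothetical wrapper around the new internal DMA calls. */
static int32_t example_dma_window(struct sxe2_common_device *cdev,
	void *virt, uint64_t iova, uint64_t len)
{
	int32_t ret;

	/* Program the kernel-side IOMMU/DMA table for this region. */
	ret = sxe2_drv_dev_dma_map(cdev, (uint64_t)(uintptr_t)virt, iova, len);
	if (ret)
		return ret;

	/* ... the device may now DMA within [iova, iova + len) ... */

	/* Tear the mapping down again; the unmap is keyed by iova alone. */
	return sxe2_drv_dev_dma_unmap(cdev, iova);
}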
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_common.c | 50 +++++++++-
drivers/common/sxe2/sxe2_ioctl_chnl.c | 104 +++++++++++++++++++++
drivers/common/sxe2/sxe2_ioctl_chnl_func.h | 9 ++
3 files changed, 162 insertions(+), 1 deletion(-)
diff --git a/drivers/common/sxe2/sxe2_common.c b/drivers/common/sxe2/sxe2_common.c
index c309a890ad..f394c832d7 100644
--- a/drivers/common/sxe2/sxe2_common.c
+++ b/drivers/common/sxe2/sxe2_common.c
@@ -442,7 +442,7 @@ static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
if (cdev == NULL) {
ret = -ENODEV;
- PMD_LOG_ERR(COM, "Fail to get remove device.");
+ PMD_LOG_ERR(COM, "Fail to get device when remove.");
goto l_end;
}
@@ -466,12 +466,60 @@ static int32_t sxe2_common_pci_remove(struct rte_pci_device *pci_dev)
return ret;
}
+static int32_t sxe2_common_pci_dma_map(struct rte_pci_device *pci_dev,
+ void *addr, uint64_t iova, size_t len)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+ PMD_LOG_ERR(COM, "Fail to get device when dma map.");
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_dma_map(cdev, (uint64_t)(uintptr_t)addr, iova, len);
+ if (ret) {
+ PMD_LOG_ERR(COM, "Fail to map dma map, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
+static int32_t sxe2_common_pci_dma_unmap(struct rte_pci_device *pci_dev,
+ void *addr __rte_unused, uint64_t iova, size_t len __rte_unused)
+{
+ struct sxe2_common_device *cdev;
+ int32_t ret = -1;
+
+ cdev = sxe2_rtedev_to_cdev(&pci_dev->device);
+ if (cdev == NULL) {
+ ret = -ENODEV;
+ PMD_LOG_ERR(COM, "Fail to get device when dma unmap.");
+ goto l_end;
+ }
+
+ ret = sxe2_drv_dev_dma_unmap(cdev, iova);
+ if (ret) {
+ PMD_LOG_ERR(COM, "Fail to unmap dma map, ret=%d", ret);
+ goto l_end;
+ }
+
+l_end:
+ return ret;
+}
+
static struct rte_pci_driver sxe2_common_pci_driver = {
.driver = {
.name = SXE2_COMMON_PCI_DRIVER_NAME,
},
.probe = sxe2_common_pci_probe,
.remove = sxe2_common_pci_remove,
+ .dma_map = sxe2_common_pci_dma_map,
+ .dma_unmap = sxe2_common_pci_dma_unmap,
};
static uint32_t sxe2_common_pci_id_table_size_get(const struct rte_pci_id *id_table)
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 48acc41509..08df9373d7 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -219,3 +219,107 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
l_end:
return ret;
}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_map)
+int32_t
+sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, uint64_t vaddr,
+ uint64_t iova, uint64_t size)
+{
+ struct sxe2_ioctl_iommu_dma_map cmd_params;
+ enum rte_iova_mode iova_mode;
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ iova_mode = rte_eal_iova_mode();
+ if (iova_mode == RTE_IOVA_PA) {
+ if (cdev->config.support_iommu) {
+ PMD_LOG_ERR(COM, "iommu not support pa mode");
+ ret = -EIO;
+ }
+ goto l_end;
+ } else if (iova_mode == RTE_IOVA_VA) {
+ if (!cdev->config.support_iommu) {
+ PMD_LOG_ERR(COM, "no iommu not support va mode, please use pa mode.");
+ ret = -EIO;
+ goto l_end;
+ }
+ }
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_map));
+ cmd_params.vaddr = vaddr;
+ cmd_params.iova = iova;
+ cmd_params.size = size;
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_MAP, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_ERR(COM, "Failed to dma map, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
+RTE_EXPORT_INTERNAL_SYMBOL(sxe2_drv_dev_dma_unmap)
+int32_t
+sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, uint64_t iova)
+{
+ int32_t ret = 0;
+ int32_t cmd_fd = 0;
+ struct sxe2_ioctl_iommu_dma_unmap cmd_params;
+
+ if (cdev->config.kernel_reset) {
+ ret = -EPERM;
+ PMD_LOG_WARN(COM, "kernel reset, need restart app.");
+ goto l_end;
+ }
+
+ if (!cdev->config.support_iommu)
+ goto l_end;
+
+ cmd_fd = SXE2_CDEV_TO_CMD_FD(cdev);
+ if (cmd_fd < 0) {
+ ret = -EBADF;
+ PMD_LOG_ERR(COM, "Failed to exec cmd, fd=%d", cmd_fd);
+ goto l_end;
+ }
+
+ PMD_LOG_DEBUG(COM, "fd %d dma unmap iova=0x%"PRIX64"",
+ cmd_fd, iova);
+
+ memset(&cmd_params, 0, sizeof(struct sxe2_ioctl_iommu_dma_unmap));
+ cmd_params.iova = iova;
+
+ (void)pthread_mutex_lock(&cdev->config.lock);
+ ret = ioctl(cmd_fd, SXE2_COM_CMD_DMA_UNMAP, &cmd_params);
+ if (ret < 0) {
+ PMD_LOG_INFO(COM, "Failed to dma unmap, fd=%d, ret=%d, err:%s",
+ cmd_fd, ret, strerror(errno));
+ ret = -EIO;
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+ goto l_end;
+ }
+ (void)pthread_mutex_unlock(&cdev->config.lock);
+
+l_end:
+ return ret;
+}
+
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
index 710ca1a8d0..ed9ceb1f5a 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl_func.h
@@ -46,6 +46,15 @@ __rte_internal
int32_t
sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len);
+__rte_internal
+int32_t
+sxe2_drv_dev_dma_map(struct sxe2_common_device *cdev, uint64_t vaddr,
+ uint64_t iova, uint64_t size);
+
+__rte_internal
+int32_t
+sxe2_drv_dev_dma_unmap(struct sxe2_common_device *cdev, uint64_t iova);
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v15 08/11] net/sxe2: support queue setup and control
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (6 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 07/11] common/sxe2: add ioctl interface for DMA map and unmap liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 09/11] drivers: add data path for Rx and Tx liujie5
` (2 subsequent siblings)
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Add support for Rx and Tx queue setup, release, and management.
Implement eth_dev_ops callbacks for rx_queue_setup, tx_queue_setup,
rx_queue_release, and tx_queue_release.
This includes:
- Allocating memory for hardware ring descriptors.
- Initializing software ring structures and hardware head/tail pointers.
- Implementing proper resource cleanup logic to prevent memory leaks
during queue reconfiguration or device close.
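For context, these callbacks are reached through the standard ethdev
configuration flow. A minimal application-side sketch follows (queue
depth and thresholds are illustrative values chosen to pass
sxe2_txq_arg_validate(); port_id and mbuf_pool are assumed to exist):

/* Hypothetical setup exercising the new callbacks. */
struct rte_eth_rxconf rx_conf = { .rx_free_thresh = 32 };
struct rte_eth_txconf tx_conf = { .tx_rs_thresh = 32, .tx_free_thresh = 32 };
int ret;

/* Dispatches to sxe2_rx_queue_setup() for this port. */
ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
		&rx_conf, mbuf_pool);
if (ret == 0)
	/* Dispatches to sxe2_tx_queue_setup(); 1024 is a multiple of tx_rs_thresh. */
	ret = rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(),
			&tx_conf);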
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/net/sxe2/meson.build | 2 +
drivers/net/sxe2/sxe2_ethdev.c | 82 +++--
drivers/net/sxe2/sxe2_ethdev.h | 15 +-
drivers/net/sxe2/sxe2_rx.c | 559 +++++++++++++++++++++++++++++++++
drivers/net/sxe2/sxe2_rx.h | 34 ++
drivers/net/sxe2/sxe2_tx.c | 420 +++++++++++++++++++++++++
drivers/net/sxe2/sxe2_tx.h | 32 ++
7 files changed, 1118 insertions(+), 26 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_rx.c
create mode 100644 drivers/net/sxe2/sxe2_rx.h
create mode 100644 drivers/net/sxe2/sxe2_tx.c
create mode 100644 drivers/net/sxe2/sxe2_tx.h
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 00c38b147c..3dfe54903a 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -18,6 +18,8 @@ sources += files(
'sxe2_cmd_chnl.c',
'sxe2_vsi.c',
'sxe2_queue.c',
+ 'sxe2_tx.c',
+ 'sxe2_rx.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 204add9c98..6abb4672f6 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -24,6 +24,8 @@
#include "sxe2_ethdev.h"
#include "sxe2_drv_cmd.h"
#include "sxe2_cmd_chnl.h"
+#include "sxe2_tx.h"
+#include "sxe2_rx.h"
#include "sxe2_common.h"
#include "sxe2_common_log.h"
#include "sxe2_host_regs.h"
@@ -86,14 +88,6 @@ static int32_t sxe2_dev_configure(struct rte_eth_dev *dev)
return ret;
}
-static void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
-static void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -112,16 +106,6 @@ static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
return ret;
}
-static int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
-static int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -307,10 +291,18 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_stop = sxe2_dev_stop,
.dev_close = sxe2_dev_close,
.dev_infos_get = sxe2_dev_infos_get,
+
+ .rx_queue_setup = sxe2_rx_queue_setup,
+ .tx_queue_setup = sxe2_tx_queue_setup,
+ .rx_queue_release = sxe2_rx_queue_release,
+ .tx_queue_release = sxe2_tx_queue_release,
+
+ .rxq_info_get = sxe2_rx_queue_info_get,
+ .txq_info_get = sxe2_tx_queue_info_get,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type)
+ enum sxe2_pci_map_resource res_type)
{
struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
struct sxe2_pci_map_bar_info *bar_info = NULL;
@@ -334,6 +326,48 @@ struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter
return bar_info;
}
+void *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type,
+ uint16_t idx_in_func)
+{
+ struct sxe2_pci_map_context *map_ctxt = &adapter->map_ctxt;
+ struct sxe2_pci_map_segment_info *seg_info = NULL;
+ struct sxe2_pci_map_bar_info *bar_info = NULL;
+ void *addr = NULL;
+ uintptr_t calc_addr = 0;
+ uint8_t reg_width = 0;
+ uint8_t i = 0;
+
+ bar_info = sxe2_dev_get_bar_info(adapter, res_type);
+ if (bar_info == NULL) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Failed to get bar info, res_type=[%d]",
+ res_type);
+ goto l_end;
+ }
+ seg_info = bar_info->seg_info;
+
+ reg_width = map_ctxt->addr_info[res_type].reg_width;
+ if (reg_width == 0) {
+ PMD_DEV_LOG_WARN(adapter, INIT, "Invalid reg width with resource type %d",
+ res_type);
+ goto l_end;
+ }
+
+ for (i = 0; i < bar_info->map_cnt; i++) {
+ seg_info = &bar_info->seg_info[i];
+ if (res_type == seg_info->type) {
+ calc_addr = (uintptr_t)seg_info->addr;
+ calc_addr += (uintptr_t)seg_info->page_inner_offset;
+ calc_addr += (uintptr_t)reg_width * (uintptr_t)idx_in_func;
+ addr = (void *)calc_addr;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return addr;
+}
+
static void sxe2_drv_dev_caps_set(struct sxe2_adapter *adapter,
struct sxe2_drv_dev_caps_resp *dev_caps)
{
@@ -402,7 +436,9 @@ static int32_t sxe2_dev_caps_get(struct sxe2_adapter *adapter)
}
int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset)
+ enum sxe2_pci_map_resource res_type,
+ uint64_t org_len,
+ uint64_t org_offset)
{
struct sxe2_pci_map_bar_info *bar_info = NULL;
struct sxe2_pci_map_segment_info *seg_info = NULL;
@@ -478,8 +514,10 @@ static int32_t sxe2_hw_init(struct rte_eth_dev *dev)
return ret;
}
-int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
- uint32_t item_cnt, uint32_t item_base)
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter,
+ uint32_t res_type,
+ uint32_t item_cnt,
+ uint32_t item_base)
{
struct sxe2_pci_map_addr_info *addr_info = NULL;
int32_t ret = 0;
diff --git a/drivers/net/sxe2/sxe2_ethdev.h b/drivers/net/sxe2/sxe2_ethdev.h
index 843e652616..001413e75a 100644
--- a/drivers/net/sxe2/sxe2_ethdev.h
+++ b/drivers/net/sxe2/sxe2_ethdev.h
@@ -293,14 +293,21 @@ struct sxe2_adapter {
#define SXE2_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
+void *sxe2_pci_map_addr_get(struct sxe2_adapter *adapter,
+ enum sxe2_pci_map_resource res_type,
+ uint16_t idx_in_func);
+
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type);
+ enum sxe2_pci_map_resource res_type);
int32_t sxe2_dev_pci_seg_map(struct sxe2_adapter *adapter,
- enum sxe2_pci_map_resource res_type, uint64_t org_len, uint64_t org_offset);
+ enum sxe2_pci_map_resource res_type,
+ uint64_t org_len, uint64_t org_offset);
-int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter, uint32_t res_type,
- uint32_t item_cnt, uint32_t item_base);
+int32_t sxe2_dev_pci_res_seg_map(struct sxe2_adapter *adapter,
+ uint32_t res_type,
+ uint32_t item_cnt,
+ uint32_t item_base);
void sxe2_dev_pci_seg_unmap(struct sxe2_adapter *adapter, uint32_t res_type);
diff --git a/drivers/net/sxe2/sxe2_rx.c b/drivers/net/sxe2/sxe2_rx.c
new file mode 100644
index 0000000000..a04c1808a6
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_rx.c
@@ -0,0 +1,559 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+
+#include "sxe2_ethdev.h"
+#include "sxe2_queue.h"
+#include "sxe2_rx.h"
+#include "sxe2_cmd_chnl.h"
+
+#include "sxe2_osal.h"
+#include "sxe2_common_log.h"
+
+static void *sxe2_rx_doorbell_tail_addr_get(struct sxe2_adapter *adapter, uint16_t queue_id)
+{
+ return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_RX_TAIL,
+ queue_id);
+}
+
+static void sxe2_rx_head_tail_init(struct sxe2_adapter *adapter, struct sxe2_rx_queue *rxq)
+{
+ rxq->rdt_reg_addr = sxe2_rx_doorbell_tail_addr_get(adapter, rxq->queue_id);
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, 0);
+}
+
+static void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq)
+{
+ uint16_t i = 0;
+ uint16_t len = 0;
+ static const union sxe2_rx_desc zeroed_desc = {{0}};
+
+ len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM;
+ for (i = 0; i < len; ++i)
+ rxq->desc_ring[i] = zeroed_desc;
+
+ memset(&rxq->fake_mbuf, 0, sizeof(rxq->fake_mbuf));
+ for (i = rxq->ring_depth; i < len; i++)
+ rxq->buffer_ring[i] = &rxq->fake_mbuf;
+
+ rxq->hold_num = 0;
+ rxq->next_ret_pkt = 0;
+ rxq->processing_idx = 0;
+ rxq->completed_pkts_num = 0;
+ rxq->batch_alloc_trigger = rxq->rx_free_thresh - 1;
+
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+
+ rxq->realloc_num = 0;
+ rxq->realloc_start = 0;
+}
+
+void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (rxq->buffer_ring != NULL) {
+ for (i = 0; i < rxq->ring_depth; i++) {
+ if (rxq->buffer_ring[i] != NULL) {
+ rte_pktmbuf_free(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ }
+
+ if (rxq->completed_pkts_num) {
+ for (i = 0; i < rxq->completed_pkts_num; ++i) {
+ if (rxq->completed_buf[rxq->next_ret_pkt + i] != NULL) {
+ rte_pktmbuf_free(rxq->completed_buf[rxq->next_ret_pkt + i]);
+ rxq->completed_buf[rxq->next_ret_pkt + i] = NULL;
+ }
+ }
+ rxq->completed_pkts_num = 0;
+ }
+}
+
+const struct sxe2_rxq_ops sxe2_default_rxq_ops = {
+ .queue_reset = sxe2_rx_queue_reset,
+ .mbufs_release = sxe2_rx_queue_mbufs_release,
+};
+
+static struct sxe2_rxq_ops sxe2_rx_default_ops_get(void)
+{
+ return sxe2_default_rxq_ops;
+}
+
+void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev,
+ uint16_t queue_id, struct rte_eth_rxq_info *qinfo)
+{
+ struct sxe2_rx_queue *rxq = NULL;
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_LOG_ERR(RX, "rx queue:%u is out of range:%u",
+ queue_id, dev->data->nb_rx_queues);
+ goto end;
+ }
+
+ rxq = dev->data->rx_queues[queue_id];
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue:%u is NULL", queue_id);
+ goto end;
+ }
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->nb_desc = rxq->ring_depth;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_drop_en = rxq->drop_en;
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+
+end:
+ return;
+}
+
+int32_t __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_rx_queue *rxq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->rx_queue_state[rx_queue_id] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (rxq == NULL) {
+ ret = 0;
+ goto l_end;
+ }
+ ret = sxe2_drv_rxq_switch(adapter, rxq, false);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Failed to switch rx queue %u off, ret = %d",
+ rx_queue_id, ret);
+ if (ret == -EPERM)
+ goto l_free;
+ goto l_end;
+ }
+
+l_free:
+ rxq->ops.mbufs_release(rxq);
+ rxq->ops.queue_reset(rxq);
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STOPPED;
+l_end:
+ return ret;
+}
+
+static void __rte_cold sxe2_rx_queue_free(struct sxe2_rx_queue *rxq)
+{
+ if (rxq != NULL) {
+ rxq->ops.mbufs_release(rxq);
+ if (rxq->buffer_ring != NULL) {
+ rte_free(rxq->buffer_ring);
+ rxq->buffer_ring = NULL;
+ }
+ rte_memzone_free(rxq->mz);
+ rte_free(rxq);
+ }
+}
+
+void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev,
+ uint16_t queue_idx)
+{
+ (void)sxe2_rx_queue_stop(dev, queue_idx);
+ sxe2_rx_queue_free(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+}
+
+void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_rxq;
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ if (data->rx_queues[nb_rxq] == NULL)
+ continue;
+ sxe2_rx_queue_release(dev, nb_rxq);
+ data->rx_queues[nb_rxq] = NULL;
+ }
+ data->nb_rx_queues = 0;
+}
+
+static struct sxe2_rx_queue *sxe2_rx_queue_alloc(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t ring_depth, uint32_t socket_id)
+{
+ struct sxe2_rx_queue *rxq;
+ const struct rte_memzone *tz;
+ uint16_t len;
+
+ if (dev->data->rx_queues[queue_idx] != NULL) {
+ sxe2_rx_queue_release(dev, queue_idx);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ rxq = rte_zmalloc_socket("rx_queue", sizeof(*rxq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue[%d] alloc failed", queue_idx);
+ goto l_end;
+ }
+
+ rxq->ring_depth = ring_depth;
+ len = rxq->ring_depth + SXE2_RX_PKTS_BURST_BATCH_NUM;
+
+ rxq->buffer_ring = rte_zmalloc_socket("rx_buffer_ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE, socket_id);
+
+ if (!rxq->buffer_ring) {
+ PMD_LOG_ERR(RX, "Rxq malloc mbuf mem failed");
+ rte_free(rxq);
+ rxq = NULL;
+ goto l_end;
+ }
+
+ tz = rte_eth_dma_zone_reserve(dev, "rx_dma", queue_idx,
+ SXE2_RX_RING_SIZE, SXE2_DESC_ADDR_ALIGN, socket_id);
+ if (tz == NULL) {
+ PMD_LOG_ERR(RX, "Rxq malloc desc mem failed");
+ rte_free(rxq->buffer_ring);
+ rxq->buffer_ring = NULL;
+ rte_free(rxq);
+ rxq = NULL;
+ goto l_end;
+ }
+
+ rxq->mz = tz;
+ memset(tz->addr, 0, SXE2_RX_RING_SIZE);
+ rxq->base_addr = tz->iova;
+ rxq->desc_ring = (union sxe2_rx_desc *)tz->addr;
+
+l_end:
+ return rxq;
+}
+
+int32_t __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ struct sxe2_rx_queue *rxq;
+ uint64_t offloads;
+ int32_t ret;
+ uint16_t rx_nseg;
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % SXE2_RX_DESC_RING_ALIGN != 0 ||
+ nb_desc > SXE2_MAX_RING_DESC ||
+ nb_desc < SXE2_MIN_RING_DESC) {
+ PMD_LOG_ERR(RX, "param desc num:%u is invalid", nb_desc);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (mp != NULL)
+ rx_nseg = 1;
+ else
+ rx_nseg = rx_conf->rx_nseg;
+
+ offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+ if (rx_nseg > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload not configured, but rx_nseg is %u",
+ dev->data->port_id, queue_idx, rx_nseg);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && !(rx_nseg > 1)) {
+ PMD_LOG_ERR(RX, "Port %u queue %u Buffer split offload configured, but rx_nseg is %u",
+ dev->data->port_id, queue_idx, rx_nseg);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
+ (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
+ PMD_LOG_ERR(RX, "port_id %u queue %u, LRO can't be configure with Keep crc.",
+ dev->data->port_id, queue_idx);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ rxq = sxe2_rx_queue_alloc(dev, queue_idx, nb_desc, socket_id);
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "rx queue[%d] resource alloc failed", queue_idx);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ if (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
+ dev->data->lro = 1;
+
+ if (rx_nseg > 1) {
+ for (i = 0; i < rx_nseg; i++) {
+ rte_memcpy(&rxq->rx_seg[i], &rx_conf->rx_seg[i].split,
+ sizeof(struct rte_eth_rxseg_split));
+ }
+ rxq->mb_pool = rxq->rx_seg[0].mp;
+ } else {
+ rxq->mb_pool = mp;
+ }
+
+ rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+ rxq->port_id = dev->data->port_id;
+ rxq->offloads = offloads;
+ if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ rxq->queue_id = queue_idx;
+ rxq->idx_in_func = vsi->rxqs.base_idx_in_func + queue_idx;
+ rxq->drop_en = rx_conf->rx_drop_en;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->vsi = vsi;
+ rxq->ops = sxe2_rx_default_ops_get();
+ rxq->ops.queue_reset(rxq);
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ ret = 0;
+l_end:
+ return ret;
+}
+
+struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp)
+{
+ return rte_mbuf_raw_alloc(mp);
+}
+
+static int32_t __rte_cold sxe2_rx_queue_mbufs_alloc(struct sxe2_rx_queue *rxq)
+{
+ struct rte_mbuf **buf_ring = rxq->buffer_ring;
+ struct rte_mbuf *mbuf = NULL;
+ struct rte_mbuf *mbuf_pay;
+ volatile union sxe2_rx_desc *desc;
+ uint64_t dma_addr;
+ int32_t ret;
+ uint16_t i, j;
+
+ for (i = 0; i < rxq->ring_depth; i++) {
+ mbuf = sxe2_mbuf_raw_alloc(rxq->mb_pool);
+ if (mbuf == NULL) {
+ PMD_LOG_ERR(RX, "Rx queue is not available or setup");
+ ret = -ENOMEM;
+ goto l_err_free_mbuf;
+ }
+
+ buf_ring[i] = mbuf;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+ desc = &rxq->desc_ring[i];
+ if (!(rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ mbuf->next = NULL;
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr = dma_addr;
+ } else {
+ mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
+ if (unlikely(!mbuf_pay)) {
+ PMD_LOG_ERR(RX, "Failed to allocate payload mbuf for RX");
+ ret = -ENOMEM;
+ goto l_err_free_mbuf;
+ }
+
+ mbuf_pay->next = NULL;
+ mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf_pay->nb_segs = 1;
+ mbuf_pay->port = rxq->port_id;
+ mbuf->next = mbuf_pay;
+
+ desc->read.hdr_addr = dma_addr;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay));
+ }
+
+#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+ desc->read.rsvd1 = 0;
+ desc->read.rsvd2 = 0;
+#endif
+ }
+
+ ret = 0;
+ goto l_end;
+
+l_err_free_mbuf:
+ for (j = 0; j <= i; j++) {
+ if (buf_ring[j] != NULL && buf_ring[j]->next != NULL) {
+ rte_pktmbuf_free(buf_ring[j]->next);
+ buf_ring[j]->next = NULL;
+ }
+
+ if (buf_ring[j] != NULL) {
+ rte_pktmbuf_free(buf_ring[j]);
+ buf_ring[j] = NULL;
+ }
+ }
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct sxe2_rx_queue *rxq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (rxq == NULL) {
+ PMD_LOG_ERR(RX, "Rx queue %u is not available or setup",
+ rx_queue_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (dev->data->rx_queue_state[rx_queue_id] ==
+ RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ ret = sxe2_rx_queue_mbufs_alloc(rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Rx queue %u apply desc ring fail",
+ rx_queue_id);
+ ret = -ENOMEM;
+ goto l_end;
+ }
+
+ sxe2_rx_head_tail_init(adapter, rxq);
+
+ ret = sxe2_drv_rxq_ctxt_cfg(adapter, rxq, 1);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Rx queue %u config ctxt fail, ret=%d",
+ rx_queue_id, ret);
+
+ (void)sxe2_drv_rxq_switch(adapter, rxq, false);
+ rxq->ops.mbufs_release(rxq);
+ rxq->ops.queue_reset(rxq);
+ goto l_end;
+ }
+
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rxq->ring_depth - 1);
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_rx_queue *rxq;
+ uint16_t nb_rxq;
+ uint16_t nb_started_rxq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (!rxq || rxq->rx_deferred_start)
+ continue;
+
+ ret = sxe2_rx_queue_start(dev, nb_rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq);
+ goto l_free_started_queue;
+ }
+
+ rte_atomic_store_explicit(&rxq->sw_stats.pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.bytes, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.drop_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.drop_bytes, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.unicast_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.broadcast_pkts, 0,
+ rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&rxq->sw_stats.multicast_pkts, 0,
+ rte_memory_order_relaxed);
+ }
+ ret = 0;
+ goto l_end;
+
+l_free_started_queue:
+ for (nb_started_rxq = 0; nb_started_rxq <= nb_rxq; nb_started_rxq++)
+ (void)sxe2_rx_queue_stop(dev, nb_started_rxq);
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ struct sxe2_stats *sw_stats_prev = &vsi->vsi_stats.vsi_sw_stats_prev;
+ struct sxe2_rx_queue *rxq = NULL;
+ int32_t ret;
+ uint16_t nb_rxq;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) {
+ ret = sxe2_rx_queue_stop(dev, nb_rxq);
+ if (ret) {
+ PMD_LOG_ERR(RX, "Fail to start rx queue %u", nb_rxq);
+ continue;
+ }
+
+ rxq = dev->data->rx_queues[nb_rxq];
+ if (rxq) {
+ sw_stats_prev->ipackets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->ierrors +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->ibytes +=
+ rte_atomic_load_explicit(&rxq->sw_stats.bytes,
+ rte_memory_order_relaxed);
+
+ sw_stats_prev->rx_sw_unicast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.unicast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_broadcast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.broadcast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_multicast_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.multicast_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_drop_packets +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_pkts,
+ rte_memory_order_relaxed);
+ sw_stats_prev->rx_sw_drop_bytes +=
+ rte_atomic_load_explicit(&rxq->sw_stats.drop_bytes,
+ rte_memory_order_relaxed);
+ }
+ }
+}
diff --git a/drivers/net/sxe2/sxe2_rx.h b/drivers/net/sxe2/sxe2_rx.h
new file mode 100644
index 0000000000..138a9d56c9
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_rx.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_RX_H__
+#define __SXE2_RX_H__
+
+#include "sxe2_queue.h"
+
+int32_t __rte_cold sxe2_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+
+int32_t __rte_cold sxe2_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+void __rte_cold sxe2_rx_queue_mbufs_release(struct sxe2_rx_queue *rxq);
+
+void __rte_cold sxe2_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+
+void __rte_cold sxe2_all_rxqs_release(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+int32_t __rte_cold sxe2_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int32_t __rte_cold sxe2_rxqs_all_start(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rxqs_all_stop(struct rte_eth_dev *dev);
+
+struct rte_mbuf *sxe2_mbuf_raw_alloc(struct rte_mempool *mp);
+
+#endif /* __SXE2_RX_H__ */
diff --git a/drivers/net/sxe2/sxe2_tx.c b/drivers/net/sxe2/sxe2_tx.c
new file mode 100644
index 0000000000..a05beb8c7a
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_tx.c
@@ -0,0 +1,420 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include "sxe2_tx.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+#include "sxe2_cmd_chnl.h"
+
+static void *sxe2_tx_doorbell_addr_get(struct sxe2_adapter *adapter, uint16_t queue_id)
+{
+ return sxe2_pci_map_addr_get(adapter, SXE2_PCI_MAP_RES_DOORBELL_TX,
+ queue_id);
+}
+
+static void sxe2_tx_tail_init(struct sxe2_adapter *adapter, struct sxe2_tx_queue *txq)
+{
+ txq->tdt_reg_addr = sxe2_tx_doorbell_addr_get(adapter, txq->queue_id);
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, 0);
+}
+
+void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq)
+{
+ uint16_t prev, i;
+ volatile union sxe2_tx_data_desc *txd;
+ static const union sxe2_tx_data_desc zeroed_desc = {{0}};
+ struct sxe2_tx_buffer *tx_buffer = txq->buffer_ring;
+
+ for (i = 0; i < txq->ring_depth; i++)
+ txq->desc_ring[i] = zeroed_desc;
+
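+ /* Mark every descriptor done and chain buffers into a circular list via next_id. */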
+ prev = txq->ring_depth - 1;
+ for (i = 0; i < txq->ring_depth; i++) {
+ txd = &txq->desc_ring[i];
+ txd->wb.dd = rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE);
+ tx_buffer[i].mbuf = NULL;
+ tx_buffer[i].last_id = i;
+ tx_buffer[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->desc_used_num = 0;
+ txq->desc_free_num = txq->ring_depth - 1;
+ txq->next_use = 0;
+ txq->next_clean = txq->ring_depth - 1;
+ txq->next_dd = txq->rs_thresh - 1;
+ txq->next_rs = txq->rs_thresh - 1;
+}
+
+void __rte_cold sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq)
+{
+ uint32_t i;
+
+ if (txq != NULL && txq->buffer_ring != NULL) {
+ for (i = 0; i < txq->ring_depth; i++) {
+ if (txq->buffer_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->buffer_ring[i].mbuf);
+ txq->buffer_ring[i].mbuf = NULL;
+ }
+ }
+ }
+}
+
+static void sxe2_tx_buffer_ring_free(struct sxe2_tx_queue *txq)
+{
+ if (txq != NULL && txq->buffer_ring != NULL)
+ rte_free(txq->buffer_ring);
+}
+
+const struct sxe2_txq_ops sxe2_default_txq_ops = {
+ .queue_reset = sxe2_tx_queue_reset,
+ .mbufs_release = sxe2_tx_queue_mbufs_release,
+ .buffer_ring_free = sxe2_tx_buffer_ring_free,
+};
+
+static struct sxe2_txq_ops sxe2_tx_default_ops_get(void)
+{
+ return sxe2_default_txq_ops;
+}
+
+static int32_t sxe2_txq_arg_validate(struct rte_eth_dev *dev, uint16_t ring_depth,
+ uint16_t *rs_thresh, uint16_t *free_thresh, const struct rte_eth_txconf *tx_conf)
+{
+ int32_t ret = 0;
+
+ if ((ring_depth % SXE2_TX_DESC_RING_ALIGN) != 0 ||
+ ring_depth > SXE2_MAX_RING_DESC ||
+ ring_depth < SXE2_MIN_RING_DESC) {
+ PMD_LOG_ERR(TX, "number:%u of receive descriptors is invalid", ring_depth);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ *free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+ *rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+ tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
+
+ if (*rs_thresh >= (ring_depth - 2)) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be less than the number "
+ "of tx descriptors minus 2. (tx_rs_thresh:%u port:%u)",
+ *rs_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (*free_thresh >= (ring_depth - 3)) {
+ PMD_LOG_ERR(TX, "tx_free_thresh must be less than the number "
+ "of tx descriptors minus 3. (tx_free_thresh:%u port:%u)",
+ *free_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (*rs_thresh > *free_thresh) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be less than or equal to "
+ "tx_free_thresh. (tx_free_thresh:%u tx_rs_thresh:%u port:%u)",
+ *free_thresh, *rs_thresh, dev->data->port_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if ((ring_depth % *rs_thresh) != 0) {
+ PMD_LOG_ERR(TX, "tx_rs_thresh must be a divisor of the "
+ "number of tx descriptors. (tx_rs_thresh:%u port:%d ring_depth:%u)",
+ *rs_thresh, dev->data->port_id, ring_depth);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct sxe2_tx_queue *txq = NULL;
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ PMD_LOG_WARN(TX, "tx queue:%u is NULL", queue_id);
+ goto end;
+ }
+
+ qinfo->nb_desc = txq->ring_depth;
+
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+ qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+ qinfo->conf.tx_free_thresh = txq->free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->rs_thresh;
+ qinfo->conf.offloads = txq->offloads;
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+
+end:
+ return;
+}
+
+int32_t __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_tx_queue *txq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->tx_queue_state[queue_id] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ ret = 0;
+ goto l_end;
+ }
+
+ ret = sxe2_drv_txq_switch(adapter, txq, false);
+ if (ret) {
+ PMD_LOG_ERR(TX, "Failed to switch tx queue %u off",
+ queue_id);
+ goto l_end;
+ }
+
+ txq->ops.mbufs_release(txq);
+ txq->ops.queue_reset(txq);
+ dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+static void __rte_cold sxe2_tx_queue_free(struct sxe2_tx_queue *txq)
+{
+ if (txq != NULL) {
+ txq->ops.mbufs_release(txq);
+ txq->ops.buffer_ring_free(txq);
+
+ rte_memzone_free(txq->mz);
+ rte_free(txq);
+ }
+}
+
+void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+ (void)sxe2_tx_queue_stop(dev, queue_idx);
+ sxe2_tx_queue_free(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+}
+
+void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_txq;
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ if (data->tx_queues[nb_txq] == NULL)
+ continue;
+
+ sxe2_tx_queue_release(dev, nb_txq);
+ data->tx_queues[nb_txq] = NULL;
+ }
+ data->nb_tx_queues = 0;
+}
+
+static struct sxe2_tx_queue
+*sxe2_tx_queue_alloc(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t ring_depth, uint32_t socket_id)
+{
+ struct sxe2_tx_queue *txq;
+ const struct rte_memzone *tz;
+
+ if (dev->data->tx_queues[queue_idx]) {
+ sxe2_tx_queue_release(dev, queue_idx);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ txq = rte_zmalloc_socket("tx_queue", sizeof(struct sxe2_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "tx queue:%d alloc failed", queue_idx);
+ goto l_end;
+ }
+
+ tz = rte_eth_dma_zone_reserve(dev, "tx_dma", queue_idx,
+ sizeof(union sxe2_tx_data_desc) * SXE2_MAX_RING_DESC,
+ SXE2_DESC_ADDR_ALIGN, socket_id);
+ if (tz == NULL) {
+ PMD_LOG_ERR(TX, "tx desc ring alloc failed, queue_id:%d", queue_idx);
+ rte_free(txq);
+ txq = NULL;
+ goto l_end;
+ }
+
+ txq->buffer_ring = rte_zmalloc_socket("tx_buffer_ring",
+ sizeof(struct sxe2_tx_buffer) * ring_depth,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq->buffer_ring == NULL) {
+ PMD_LOG_ERR(TX, "tx buffer alloc failed, queue_id:%d", queue_idx);
+ rte_memzone_free(tz);
+ rte_free(txq);
+ txq = NULL;
+ goto l_end;
+ }
+
+ txq->mz = tz;
+ txq->base_addr = tz->iova;
+ txq->desc_ring = (volatile union sxe2_tx_data_desc *)tz->addr;
+
+l_end:
+ return txq;
+}
+
+int32_t __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int32_t ret = 0;
+ uint16_t tx_rs_thresh;
+ uint16_t tx_free_thresh;
+ struct sxe2_tx_queue *txq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ struct sxe2_vsi *vsi = adapter->vsi_ctxt.main_vsi;
+ uint64_t offloads;
+ PMD_INIT_FUNC_TRACE();
+
+ ret = sxe2_txq_arg_validate(dev, nb_desc, &tx_rs_thresh, &tx_free_thresh, tx_conf);
+ if (ret) {
+ PMD_LOG_ERR(TX, "tx queue:%u arg validate failed", queue_idx);
+ goto end;
+ }
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ txq = sxe2_tx_queue_alloc(dev, queue_idx, nb_desc, socket_id);
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "failed to alloc sxe2vf tx queue:%u resource", queue_idx);
+ ret = -ENOMEM;
+ goto end;
+ }
+
+ txq->vlan_flag = SXE2_TX_FLAGS_VLAN_TAG_LOC_L2TAG1;
+ txq->ring_depth = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->pthresh = tx_conf->tx_thresh.pthresh;
+ txq->hthresh = tx_conf->tx_thresh.hthresh;
+ txq->wthresh = tx_conf->tx_thresh.wthresh;
+ txq->queue_id = queue_idx;
+ txq->idx_in_func = vsi->txqs.base_idx_in_func + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->vsi = vsi;
+ txq->ops = sxe2_tx_default_ops_get();
+ txq->ops.queue_reset(txq);
+
+ dev->data->tx_queues[queue_idx] = txq;
+ ret = 0;
+
+end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ int32_t ret = 0;
+ struct sxe2_tx_queue *txq;
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ PMD_INIT_FUNC_TRACE();
+
+ if (dev->data->tx_queue_state[queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = 0;
+ goto l_end;
+ }
+
+ txq = dev->data->tx_queues[queue_id];
+ if (txq == NULL) {
+ PMD_LOG_ERR(TX, "tx queue:%u is not available or setup", queue_id);
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ ret = sxe2_drv_txq_ctxt_cfg(adapter, txq, 1);
+ if (ret) {
+ PMD_LOG_ERR(TX, "tx queue:%u config ctxt fail", queue_id);
+
+ (void)sxe2_drv_txq_switch(adapter, txq, false);
+ txq->ops.mbufs_release(txq);
+ txq->ops.queue_reset(txq);
+ goto l_end;
+ }
+
+ sxe2_tx_tail_init(adapter, txq);
+
+ dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ struct sxe2_tx_queue *txq;
+ uint16_t nb_txq;
+ uint16_t nb_started_txq;
+ int32_t ret;
+ PMD_INIT_FUNC_TRACE();
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ txq = dev->data->tx_queues[nb_txq];
+ if (!txq || txq->tx_deferred_start)
+ continue;
+
+ ret = sxe2_tx_queue_start(dev, nb_txq);
+ if (ret) {
+ PMD_LOG_ERR(TX, "Fail to start tx queue %u", nb_txq);
+ goto l_free_started_queue;
+ }
+ }
+ ret = 0;
+ goto l_end;
+
+l_free_started_queue:
+ for (nb_started_txq = 0; nb_started_txq <= nb_txq; nb_started_txq++)
+ (void)sxe2_tx_queue_stop(dev, nb_started_txq);
+
+l_end:
+ return ret;
+}
+
+void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_dev_data *data = dev->data;
+ uint16_t nb_txq;
+ int32_t ret;
+
+ for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
+ ret = sxe2_tx_queue_stop(dev, nb_txq);
+ if (ret) {
+ PMD_LOG_WARN(TX, "Fail to stop tx queue %u", nb_txq);
+ continue;
+ }
+ }
+}
diff --git a/drivers/net/sxe2/sxe2_tx.h b/drivers/net/sxe2/sxe2_tx.h
new file mode 100644
index 0000000000..c929b1bee2
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_tx.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TX_H__
+#define __SXE2_TX_H__
+#include "sxe2_queue.h"
+
+void __rte_cold sxe2_tx_queue_reset(struct sxe2_tx_queue *txq);
+
+int32_t __rte_cold sxe2_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
+
+void sxe2_tx_queue_mbufs_release(struct sxe2_tx_queue *txq);
+
+int32_t __rte_cold sxe2_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
+
+int32_t __rte_cold sxe2_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t nb_desc, uint32_t socket_id,
+ const struct rte_eth_txconf *tx_conf);
+
+void __rte_cold sxe2_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
+
+void __rte_cold sxe2_all_txqs_release(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+int32_t __rte_cold sxe2_txqs_all_start(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_txqs_all_stop(struct rte_eth_dev *dev);
+
+#endif /* __SXE2_TX_H__ */
--
2.47.3
* [PATCH v15 09/11] drivers: add data path for Rx and Tx
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (7 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 08/11] net/sxe2: support queue setup and control liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 10/11] net/sxe2: add vectorized " liujie5
2026-05-16 7:46 ` [PATCH v15 11/11] net/sxe2: implement Tx done cleanup liujie5
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
Implement receive and transmit burst functions for sxe2 PMD.
Add sxe2_recv_pkts and sxe2_xmit_pkts as the primary data path
interfaces.
The implementation includes:
- Efficient descriptor fetching and mbuf allocation for Rx.
- Descriptor setup and checksum offload handling for Tx.
- Buffer recycling and hardware tail pointer updates.
- Performance-oriented loop unrolling and prefetching where applicable.
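For reviewers, a sketch of the polling loop these burst functions
serve once installed as rx_pkt_burst/tx_pkt_burst (port_id is assumed
configured and started; the burst size is illustrative):

/* Hypothetical forwarding loop over queue 0. */
struct rte_mbuf *pkts[32];
uint16_t nb_rx, nb_tx;

nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);

/* Free anything the Tx ring could not absorb. */
while (nb_tx < nb_rx)
	rte_pktmbuf_free(pkts[nb_tx++]);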
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/common/sxe2/sxe2_ioctl_chnl.c | 8 +-
drivers/net/sxe2/meson.build | 2 +
drivers/net/sxe2/sxe2_ethdev.c | 13 +-
drivers/net/sxe2/sxe2_txrx.c | 246 +++++++
drivers/net/sxe2/sxe2_txrx.h | 21 +
drivers/net/sxe2/sxe2_txrx_poll.c | 919 ++++++++++++++++++++++++++
6 files changed, 1204 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_txrx.c
create mode 100644 drivers/net/sxe2/sxe2_txrx.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_poll.c
diff --git a/drivers/common/sxe2/sxe2_ioctl_chnl.c b/drivers/common/sxe2/sxe2_ioctl_chnl.c
index 08df9373d7..0522dc68b5 100644
--- a/drivers/common/sxe2/sxe2_ioctl_chnl.c
+++ b/drivers/common/sxe2/sxe2_ioctl_chnl.c
@@ -177,13 +177,13 @@ void
goto l_err;
}
- PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=0x%zx, src=0x%"PRIx64", offset=0x%"PRIx64"",
+ PMD_LOG_DEBUG(COM, "fd=%d, bar idx=%d, len=%"PRIu64", src=0x%"PRIx64", offset=0x%"PRIx64"",
bar_idx, cmd_fd, len, offset, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
virt = mmap(NULL, len, PROT_READ | PROT_WRITE,
MAP_SHARED, cmd_fd, SXE2_COM_PCI_OFFSET_GEN(bar_idx, offset));
if (virt == MAP_FAILED) {
- PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=0x%zx, offset=0x%"PRIx64", err:%s",
+ PMD_LOG_ERR(COM, "Failed mmap, cmd_fd=%d, len=%"PRIu64", offset=0x%"PRIx64", err:%s",
cmd_fd, len, offset, strerror(errno));
goto l_err;
}
@@ -205,12 +205,12 @@ sxe2_drv_dev_munmap(struct sxe2_common_device *cdev, void *virt, uint64_t len)
goto l_end;
}
- PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%zx",
+ PMD_LOG_DEBUG(COM, "Munmap virt=%p, len=0x%"PRIx64"",
virt, len);
ret = munmap(virt, len);
if (ret < 0) {
- PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=0x%zx, err:%s",
+ PMD_LOG_ERR(COM, "Failed to munmap, virt=%p, len=%"PRIu64", err:%s",
virt, len, strerror(errno));
ret = -errno;
goto l_end;
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 3dfe54903a..5645e3ad61 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -20,6 +20,8 @@ sources += files(
'sxe2_queue.c',
'sxe2_tx.c',
'sxe2_rx.c',
+ 'sxe2_txrx_poll.c',
+ 'sxe2_txrx.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 6abb4672f6..8b76231057 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -26,6 +26,7 @@
#include "sxe2_cmd_chnl.h"
#include "sxe2_tx.h"
#include "sxe2_rx.h"
+#include "sxe2_txrx.h"
#include "sxe2_common.h"
#include "sxe2_common_log.h"
#include "sxe2_host_regs.h"
@@ -137,6 +138,9 @@ static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
goto l_end;
}
+ sxe2_rx_mode_func_set(dev);
+ sxe2_tx_mode_func_set(dev);
+
ret = sxe2_queues_start(dev);
if (ret) {
PMD_LOG_ERR(INIT, "enable queues failed");
@@ -763,10 +767,17 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
+ sxe2_set_common_function(dev);
+
dev->dev_ops = &sxe2_eth_dev_ops;
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ sxe2_rx_mode_func_set(dev);
+ sxe2_tx_mode_func_set(dev);
goto l_end;
+ }
ret = sxe2_hw_init(dev);
if (ret) {
diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c
new file mode 100644
index 0000000000..2531b49a52
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx.c
@@ -0,0 +1,246 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include <unistd.h>
+
+#include "sxe2_txrx.h"
+#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_poll.h"
+#include "sxe2_ethdev.h"
+
+#include "sxe2_common_log.h"
+#include "sxe2_osal.h"
+#include "sxe2_cmd_chnl.h"
+#if defined(RTE_ARCH_ARM64)
+#include <rte_cpuflags.h>
+#endif
+
+static int32_t sxe2_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+ int32_t ret;
+ uint16_t desc_idx;
+
+ if (unlikely(offset >= txq->ring_depth)) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
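+ /* DD is only written back on RS-marked descriptors, so round the index up to the next rs_thresh boundary before checking. */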
+ desc_idx = txq->next_use + offset;
+ desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh);
+ if (desc_idx >= txq->ring_depth) {
+ desc_idx -= txq->ring_depth;
+ if (desc_idx >= txq->ring_depth)
+ desc_idx -= txq->ring_depth;
+ }
+
+ if (desc_idx == 0)
+ desc_idx = txq->rs_thresh - 1;
+ else
+ desc_idx -= 1;
+
+ if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) ==
+ (txq->desc_ring[desc_idx].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK)))
+ ret = RTE_ETH_TX_DESC_DONE;
+ else
+ ret = RTE_ETH_TX_DESC_FULL;
+
+l_end:
+ return ret;
+}
+
+static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
+{
+ struct rte_mbuf *m_seg = mbuf;
+
+ while (m_seg != NULL) {
+ if (m_seg->data_len == 0)
+ return -EINVAL;
+ m_seg = m_seg->next;
+ }
+
+ return 0;
+}
+
+uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = tx_queue;
+ struct rte_mbuf *mbuf;
+ uint64_t ol_flags = 0;
+ int32_t ret = 0;
+ int32_t i = 0;
+
+ for (i = 0; i < nb_pkts; i++) {
+ mbuf = tx_pkts[i];
+ if (!mbuf)
+ continue;
+ ol_flags = mbuf->ol_flags;
+ if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
+ if (mbuf->nb_segs > SXE2_TX_MTU_SEG_MAX ||
+ mbuf->pkt_len > SXE2_FRAME_SIZE_MAX) {
+ rte_errno = EINVAL;
+ goto l_end;
+ }
+ } else if ((mbuf->tso_segsz < SXE2_MIN_TSO_MSS) ||
+ (mbuf->tso_segsz > SXE2_MAX_TSO_MSS) ||
+ (mbuf->nb_segs > txq->ring_depth) ||
+ (mbuf->pkt_len > SXE2_TX_TSO_PKTLEN_MAX)) {
+ rte_errno = EINVAL;
+ goto l_end;
+ }
+
+ if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) {
+ rte_errno = EINVAL;
+ goto l_end;
+ }
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = rte_validate_tx_offload(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+
+ ret = sxe2_tx_mbuf_empty_check(mbuf);
+ if (ret != 0) {
+ rte_errno = -ret;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return i;
+}
+
+void sxe2_tx_mode_func_set(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ uint32_t tx_mode_flags = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts;
+ adapter->q_ctxt.tx_mode_flags = tx_mode_flags;
+ PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.",
+ tx_mode_flags, dev->data->port_id);
+}
+
+static int32_t sxe2_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc;
+ int32_t ret;
+
+ if (unlikely(offset >= rxq->ring_depth)) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+
+ if (offset >= rxq->ring_depth - rxq->hold_num) {
+ ret = RTE_ETH_RX_DESC_UNAVAIL;
+ goto l_end;
+ }
+
+ if (rxq->processing_idx + offset >= rxq->ring_depth)
+ desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth];
+ else
+ desc = &rxq->desc_ring[rxq->processing_idx + offset];
+
+ if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK)
+ ret = RTE_ETH_RX_DESC_DONE;
+ else
+ ret = RTE_ETH_RX_DESC_AVAIL;
+
+l_end:
+ PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u",
+ offset, ret, rxq->queue_id, rxq->port_id);
+ return ret;
+}
+
+static int32_t sxe2_rx_queue_count(void *rx_queue)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc;
+ uint16_t done_num = 0;
+
+ desc = &rxq->desc_ring[rxq->processing_idx];
+ while ((done_num < rxq->ring_depth) &&
+ (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_DD_MASK)) {
+ done_num += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
+ if (rxq->processing_idx + done_num >= rxq->ring_depth)
+ desc = &rxq->desc_ring[rxq->processing_idx + done_num - rxq->ring_depth];
+ else
+ desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
+ }
+
+ PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u",
+ done_num, rxq->queue_id, rxq->port_id);
+
+ return done_num;
+}
+
+static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
+{
+ struct sxe2_rx_queue *rxq;
+ bool en = false;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL)
+ continue;
+
+ if (0 != (rxq->offloads & offload)) {
+ en = true;
+ goto l_end;
+ }
+ }
+
+l_end:
+ return en;
+}
+
+void sxe2_rx_mode_func_set(struct rte_eth_dev *dev)
+{
+ struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
+ uint32_t rx_mode_flags = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split;
+ else
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered;
+
+ PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.",
+ rx_mode_flags, dev->data->port_id);
+ adapter->q_ctxt.rx_mode_flags = rx_mode_flags;
+}
+
+void sxe2_set_common_function(struct rte_eth_dev *dev)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ dev->rx_queue_count = sxe2_rx_queue_count;
+ dev->rx_descriptor_status = sxe2_rx_desciptor_status;
+
+ dev->tx_descriptor_status = sxe2_tx_desciptor_status;
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+}
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
new file mode 100644
index 0000000000..f6558e2189
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef SXE2_TXRX_H
+#define SXE2_TXRX_H
+#include <ethdev_driver.h>
+#include "sxe2_queue.h"
+
+void sxe2_set_common_function(struct rte_eth_dev *dev);
+
+uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
+void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
+
+void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
+
+void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
+
+#endif /* SXE2_TXRX_H */
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
new file mode 100644
index 0000000000..6d37fdef36
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -0,0 +1,927 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <ethdev_driver.h>
+#include <unistd.h>
+
+#include "sxe2_osal.h"
+#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_poll.h"
+#include "sxe2_txrx.h"
+#include "sxe2_queue.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+static __rte_always_inline int32_t
+sxe2_tx_bufs_free(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX];
+ int32_t ret;
+ uint32_t i;
+ uint16_t rs_thresh;
+ uint16_t free_num;
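+ /* No descriptors can be recycled until the one at next_dd is written back. */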
+ if ((txq->desc_ring[txq->next_dd].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) !=
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) {
+ ret = 0;
+ goto l_end;
+ }
+ rs_thresh = txq->rs_thresh;
+ buffer = &txq->buffer_ring[txq->next_dd - rs_thresh + 1];
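+ /* Fast-free path: batch mbufs from the same mempool and return them in bulk. */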
+ if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+ if (likely(rs_thresh <= SXE2_TX_FREE_BUFFER_SIZE_MAX)) {
+ mbuf = buffer[0].mbuf;
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = buffer[i].mbuf;
+ if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) {
+ mbuf_free_arr[free_num] = mbuf;
+ free_num++;
+ } else {
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ }
+ }
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ } else {
+ for (i = 0; i < rs_thresh; ++i, ++buffer) {
+ rte_mempool_put(buffer->mbuf->pool, buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+ }
+ } else {
+ for (i = 0; i < rs_thresh; ++i, ++buffer) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer->mbuf);
+ if (mbuf != NULL)
+ rte_mempool_put(mbuf->pool, mbuf);
+ buffer->mbuf = NULL;
+ }
+ }
+ txq->desc_free_num += rs_thresh;
+ txq->next_dd += rs_thresh;
+ if (txq->next_dd >= txq->ring_depth)
+ txq->next_dd = rs_thresh - 1;
+ ret = rs_thresh;
+l_end:
+ return ret;
+}
+
+static inline int32_t sxe2_tx_cleanup(struct sxe2_tx_queue *txq)
+{
+ int32_t ret = 0;
+ volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring;
+ struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring;
+ uint16_t ring_depth = txq->ring_depth;
+ uint16_t next_clean = txq->next_clean;
+ uint16_t clean_last;
+ uint16_t clean_num;
+
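+ /* Probe rs_thresh descriptors ahead (wrapping the ring) for a completed write-back. */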
+ clean_last = next_clean + txq->rs_thresh;
+ if (clean_last >= ring_depth)
+ clean_last = clean_last - ring_depth;
+
+ clean_last = buffer_ring[clean_last].last_id;
+ if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) !=
+ (txq->desc_ring[clean_last].wb.dd & rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK))) {
+ PMD_LOG_DEBUG(TX, "desc[%u] is not done.port_id=%u queue_id=%u val=0x%" PRIx64,
+ clean_last, txq->port_id,
+ txq->queue_id, txq->desc_ring[clean_last].wb.dd);
+ ret = -1;
+ goto l_end;
+ }
+
+ if (clean_last > next_clean)
+ clean_num = clean_last - next_clean;
+ else
+ clean_num = ring_depth - next_clean + clean_last;
+
+ desc_ring[clean_last].wb.dd = 0;
+
+ txq->next_clean = clean_last;
+ txq->desc_free_num += clean_num;
+
+ ret = 0;
+
+l_end:
+ return ret;
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt)
+{
+ struct rte_mbuf *m_seg = tx_pkt;
+ uint16_t count = 0;
+
+ while (m_seg != NULL) {
+ count += DIV_ROUND_UP(m_seg->data_len,
+ SXE2_TX_MAX_DATA_NUM_PER_DESC);
+ m_seg = m_seg->next;
+ }
+
+ return count;
+}
+
+static __rte_always_inline void
+sxe2_tx_desc_checksum_fill(uint64_t offloads, uint32_t *desc_cmd, uint32_t *desc_offset,
+ union sxe2_tx_offload_info ol_info)
+{
+ if (offloads & RTE_MBUF_F_TX_IP_CKSUM) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV4) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV6) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6;
+ *desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(ol_info.l3_len);
+ }
+
+ if (offloads & RTE_MBUF_F_TX_TCP_SEG) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ goto l_end;
+ }
+
+ if (offloads & RTE_MBUF_F_TX_UDP_SEG) {
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ goto l_end;
+ }
+
+ switch (offloads & RTE_MBUF_F_TX_L4_MASK) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ *desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ *desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(ol_info.l4_len);
+ break;
+ default:
+
+ break;
+ }
+
+l_end:
+ return;
+}
+
+static __rte_always_inline uint64_t
+sxe2_tx_data_desc_build_cobt(uint32_t cmd, uint32_t offset, uint16_t buf_size, uint16_t l2tag)
+{
+ return rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DATA |
+ (((uint64_t)cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT) |
+ (((uint64_t)offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT) |
+ (((uint64_t)buf_size) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT) |
+ (((uint64_t)l2tag) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT));
+}
+
+uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = tx_queue;
+ struct sxe2_tx_buffer *buffer_ring;
+ struct sxe2_tx_buffer *buffer;
+ struct sxe2_tx_buffer *next_buffer;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ volatile union sxe2_tx_data_desc *desc_ring;
+ volatile union sxe2_tx_data_desc *desc;
+ volatile struct sxe2_tx_context_desc *ctxt_desc;
+ union sxe2_tx_offload_info ol_info;
+ struct sxe2_vsi *vsi = txq->vsi;
+ rte_iova_t buf_dma_addr;
+ uint64_t offloads;
+ uint64_t desc_type_cmd_tso_mss;
+ uint32_t desc_cmd;
+ uint32_t desc_offset;
+ uint32_t desc_tag;
+ uint32_t desc_tunneling_params;
+ uint16_t ipsec_offset;
+ uint16_t ctxt_desc_num;
+ uint16_t desc_sum_num;
+ uint16_t tx_num;
+ uint16_t seg_len;
+ uint16_t next_use;
+ uint16_t last_use;
+ uint16_t desc_l2tag2;
+
+ buffer_ring = txq->buffer_ring;
+ desc_ring = txq->desc_ring;
+ next_use = txq->next_use;
+ buffer = &buffer_ring[next_use];
+
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_cleanup(txq);
+
+ for (tx_num = 0; tx_num < nb_pkts; tx_num++) {
+ tx_pkt = *tx_pkts++;
+ desc_cmd = 0;
+ desc_offset = 0;
+ desc_tag = 0;
+ desc_tunneling_params = 0;
+ ipsec_offset = 0;
+ offloads = tx_pkt->ol_flags;
+ ol_info.l2_len = tx_pkt->l2_len;
+ ol_info.l3_len = tx_pkt->l3_len;
+ ol_info.l4_len = tx_pkt->l4_len;
+ ol_info.tso_segsz = tx_pkt->tso_segsz;
+ ol_info.outer_l2_len = tx_pkt->outer_l2_len;
+ ol_info.outer_l3_len = tx_pkt->outer_l3_len;
+
+ ctxt_desc_num = (offloads &
+ SXE2_TX_OFFLOAD_CTXT_NEEDCK_MASK) ? 1 : 0;
+ if (unlikely(vsi->vsi_type == SXE2_VSI_T_DPDK_ESW))
+ ctxt_desc_num = 1;
+
+ if (offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))
+ desc_sum_num = sxe2_tx_pkt_data_desc_count(tx_pkt) + ctxt_desc_num;
+ else
+ desc_sum_num = tx_pkt->nb_segs + ctxt_desc_num;
+
+ last_use = next_use + desc_sum_num - 1;
+ if (last_use >= txq->ring_depth)
+ last_use = last_use - txq->ring_depth;
+
+ if (desc_sum_num > txq->desc_free_num) {
+ if (unlikely(sxe2_tx_cleanup(txq) != 0))
+ goto l_exit_logic;
+
+ if (unlikely(desc_sum_num > txq->rs_thresh)) {
+ while (desc_sum_num > txq->desc_free_num)
+ if (unlikely(sxe2_tx_cleanup(txq) != 0))
+ goto l_exit_logic;
+ }
+ }
+
+ desc_offset |= SXE2_TX_DATA_DESC_MACLEN_VAL(ol_info.l2_len);
+
+ if (offloads & SXE2_TX_OFFLOAD_CKSUM_MASK) {
+ sxe2_tx_desc_checksum_fill(offloads, &desc_cmd,
+ &desc_offset, ol_info);
+ }
+
+ if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1;
+ desc_tag = tx_pkt->vlan_tci;
+ }
+
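+ /* Emit a context descriptor when offloads (or an ESW VSI) require one. */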
+ if (ctxt_desc_num) {
+ ctxt_desc = (volatile struct sxe2_tx_context_desc *)
+ &desc_ring[next_use];
+ desc_l2tag2 = 0;
+ desc_type_cmd_tso_mss = SXE2_TX_DESC_DTYPE_CTXT;
+
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+
+ if (buffer->mbuf) {
+ rte_pktmbuf_free_seg(buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+
+ if (offloads & RTE_MBUF_F_TX_QINQ) {
+ desc_l2tag2 = tx_pkt->vlan_tci_outer;
+ desc_type_cmd_tso_mss |= SXE2_TX_CTXT_DESC_CMD_IL2TAG2_MASK;
+ }
+
+ ctxt_desc->tunneling_params =
+ rte_cpu_to_le_32(desc_tunneling_params);
+ ctxt_desc->l2tag2 = rte_cpu_to_le_16(desc_l2tag2);
+ ctxt_desc->type_cmd_tso_mss = rte_cpu_to_le_64(desc_type_cmd_tso_mss);
+ ctxt_desc->ipsec_offset = rte_cpu_to_le_64(ipsec_offset);
+
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+ }
+
+ m_seg = tx_pkt;
+
+ do {
+ desc = &desc_ring[next_use];
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+ if (buffer->mbuf) {
+ rte_pktmbuf_free_seg(buffer->mbuf);
+ buffer->mbuf = NULL;
+ }
+
+ buffer->mbuf = m_seg;
+ seg_len = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ while ((offloads & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)) &&
+ unlikely(seg_len > SXE2_TX_MAX_DATA_NUM_PER_DESC)) {
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(desc_cmd, desc_offset,
+ SXE2_TX_MAX_DATA_NUM_PER_DESC,
+ desc_tag);
+ buf_dma_addr += SXE2_TX_MAX_DATA_NUM_PER_DESC;
+ seg_len -= SXE2_TX_MAX_DATA_NUM_PER_DESC;
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+ desc = &desc_ring[next_use];
+ next_buffer = &buffer_ring[buffer->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(next_buffer->mbuf);
+ }
+
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(desc_cmd,
+ desc_offset, seg_len, desc_tag);
+
+ buffer->last_id = last_use;
+ next_use = buffer->next_id;
+ buffer = next_buffer;
+
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_EOP;
+ txq->desc_used_num += desc_sum_num;
+ txq->desc_free_num -= desc_sum_num;
+
+ if (txq->desc_used_num >= txq->rs_thresh) {
+ PMD_LOG_DEBUG(TX, "Tx pkts set RS bit."
+ "last_use=%u port_id=%u, queue_id=%u",
+ last_use, txq->port_id, txq->queue_id);
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_RS;
+ txq->desc_used_num = 0;
+ }
+
+ desc->read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT);
+ }
+
+l_exit_logic:
+ if (tx_num == 0)
+ goto l_end;
+ goto l_end_of_tx;
+l_end_of_tx:
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, next_use, tx_num);
+
+ txq->next_use = next_use;
+
+l_end:
+ return tx_num;
+}
+
+static __rte_always_inline void
+sxe2_tx_data_desc_fill(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf **tx_pkts)
+{
+ rte_iova_t buf_dma_addr;
+ uint32_t desc_offset;
+ buf_dma_addr = rte_mbuf_data_iova(*tx_pkts);
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP,
+ desc_offset, (*tx_pkts)->data_len, 0);
+}
+static __rte_always_inline void
+sxe2_tx_data_desc_fill_batch(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf **tx_pkts)
+{
+ rte_iova_t buf_dma_addr;
+ uint32_t i;
+ uint32_t desc_offset;
+ for (i = 0; i < SXE2_TX_FILL_PER_LOOP; ++i, ++desc, ++tx_pkts) {
+ buf_dma_addr = rte_mbuf_data_iova(*tx_pkts);
+ desc->read.buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL((*tx_pkts)->l2_len);
+ desc->read.type_cmd_off_bsz_l2t =
+ sxe2_tx_data_desc_build_cobt(SXE2_TX_DATA_DESC_CMD_EOP,
+ desc_offset,
+ (*tx_pkts)->data_len,
+ 0);
+ }
+}
+
+static inline void sxe2_tx_ring_fill(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_buffer *buffer = &txq->buffer_ring[txq->next_use];
+ volatile union sxe2_tx_data_desc *desc = &txq->desc_ring[txq->next_use];
+ uint32_t i, j;
+ uint32_t mainpart;
+ uint32_t leftover;
+ mainpart = nb_pkts & ((uint32_t)~SXE2_TX_FILL_PER_LOOP_MASK);
+ leftover = nb_pkts & ((uint32_t)SXE2_TX_FILL_PER_LOOP_MASK);
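+ /* Fill whole batches of SXE2_TX_FILL_PER_LOOP descriptors, then the remainder. */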
+ for (i = 0; i < mainpart; i += SXE2_TX_FILL_PER_LOOP) {
+ for (j = 0; j < SXE2_TX_FILL_PER_LOOP; ++j)
+ (buffer + i + j)->mbuf = *(tx_pkts + i + j);
+ sxe2_tx_data_desc_fill_batch(desc + i, tx_pkts + i);
+ }
+ if (unlikely(leftover > 0)) {
+ for (i = 0; i < leftover; ++i) {
+ (buffer + mainpart + i)->mbuf = *(tx_pkts + mainpart + i);
+ sxe2_tx_data_desc_fill(desc + mainpart + i,
+ tx_pkts + mainpart + i);
+ }
+ }
+}
+
+static inline uint16_t sxe2_tx_pkts_batch(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+ volatile union sxe2_tx_data_desc *desc_ring = txq->desc_ring;
+ uint16_t res_num = 0;
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_bufs_free(txq);
+ nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts);
+ if (unlikely(nb_pkts == 0)) {
+ PMD_LOG_DEBUG(TX, "Tx batch: may not enough free desc, "
+ "free_desc=%u, need_tx_pkts=%u",
+ txq->desc_free_num, nb_pkts);
+ goto l_end;
+ }
+ txq->desc_free_num -= nb_pkts;
+ if ((txq->next_use + nb_pkts) > txq->ring_depth) {
+ res_num = txq->ring_depth - txq->next_use;
+ sxe2_tx_ring_fill(txq, tx_pkts, res_num);
+ desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs = txq->rs_thresh - 1;
+ txq->next_use = 0;
+ }
+ sxe2_tx_ring_fill(txq, tx_pkts + res_num, nb_pkts - res_num);
+ txq->next_use = txq->next_use + (nb_pkts - res_num);
+ if (txq->next_use > txq->next_rs) {
+ desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs += txq->rs_thresh;
+ if (txq->next_rs >= txq->ring_depth)
+ txq->next_rs = txq->rs_thresh - 1;
+ }
+ if (txq->next_use >= txq->ring_depth)
+ txq->next_use = 0;
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, txq->next_use, nb_pkts);
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, txq->next_use);
+l_end:
+ return nb_pkts;
+}
+
+static inline void
+sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, uint16_t hold_num, uint16_t rx_id)
+{
+ hold_num += rxq->hold_num;
+
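+ /* Batch tail updates: only write the register once enough buffers were recycled. */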
+ if (hold_num > rxq->rx_free_thresh) {
+ rx_id = (uint16_t)((rx_id == 0) ? (rxq->ring_depth - 1) : (rx_id - 1));
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, rx_id);
+ hold_num = 0;
+ }
+ rxq->hold_num = hold_num;
+}
+
+static inline uint64_t
+sxe2_rx_desc_error_para(__rte_unused struct sxe2_rx_queue *rxq,
+ union sxe2_rx_desc *desc)
+{
+ uint64_t flags = 0;
+ uint64_t desc_qw1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+
+ if (unlikely(0 == (desc_qw1 & SXE2_RX_DESC_STATUS_L3L4_P_MASK)))
+ goto l_end;
+
+ if (likely(0 == (desc->wb.rxdid_src & SXE2_RX_DESC_EUDPE_MASK)))
+ flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+ else
+ flags = RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_QW1_ERRORS_MASK))) {
+ flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD);
+ goto l_end;
+ }
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_IPE_MASK)))
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+ else
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+
+ if (likely(0 == (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_L4_MASK)))
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+ else
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+
+ if (unlikely(0 != (desc_qw1 & SXE2_RX_DESC_ERROR_CSUM_EIP_MASK)))
+ flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+l_end:
+ return flags;
+}
+
+static __rte_always_inline void
+sxe2_rx_mbuf_common_fields_fill(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf,
+ union sxe2_rx_desc *rxd)
+{
+ uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ uint64_t qword1;
+ uint64_t pkt_flags;
+ qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len);
+
+ mbuf->ol_flags = 0;
+ mbuf->packet_type = ptype_tbl[SXE2_RX_DESC_PTYPE_VAL_GET(qword1)];
+
+ pkt_flags = sxe2_rx_desc_error_para(rxq, rxd);
+
+ mbuf->ol_flags |= pkt_flags;
+}
+
+static __rte_always_inline void
+sxe2_rx_sw_stats_update(struct sxe2_rx_queue *rxq, struct rte_mbuf *mbuf,
+ union sxe2_rx_desc *rxd)
+{
+ uint64_t qword1 = rte_le_to_cpu_64(rxd->wb.status_err_ptype_len);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes,
+ mbuf->pkt_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ switch (SXE2_RX_DESC_STATUS_UMBCAST_VAL_GET(qword1)) {
+ case SXE2_RX_DESC_STATUS_UNICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_MUTICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_BOARDCAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ default:
+ break;
+ }
+}
+
+uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile union sxe2_rx_desc *desc;
+ union sxe2_rx_desc desc_tmp;
+ struct rte_mbuf **buffer_ring;
+ struct rte_mbuf **cur_buffer;
+ struct rte_mbuf *cur_mbuf;
+ struct rte_mbuf *new_mbuf;
+ struct rte_mbuf *first_seg;
+ struct rte_mbuf *last_seg;
+ uint64_t qword1;
+ uint16_t done_num;
+ uint16_t hold_num;
+ uint16_t cur_idx;
+ uint16_t pkt_len;
+
+ desc_ring = rxq->desc_ring;
+ buffer_ring = rxq->buffer_ring;
+ cur_idx = rxq->processing_idx;
+ first_seg = rxq->pkt_first_seg;
+ last_seg = rxq->pkt_last_seg;
+ done_num = 0;
+ hold_num = 0;
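+ /* Consume descriptors until one is not yet done (DD clear) or nb_pkts is reached. */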
+ while (done_num < nb_pkts) {
+ desc = &desc_ring[cur_idx];
+ qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+ if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1))
+ break;
+
+ new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (unlikely(new_mbuf == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ PMD_LOG_INFO(RX, "Rx new_mbuf alloc failed port_id:%u "
+ "queue_id:%u", rxq->port_id, rxq->queue_id);
+ break;
+ }
+
+ hold_num++;
+ desc_tmp = *desc;
+ cur_buffer = &buffer_ring[cur_idx];
+ cur_idx++;
+ if (unlikely(cur_idx == rxq->ring_depth))
+ cur_idx = 0;
+
+ rte_prefetch0(buffer_ring[cur_idx]);
+
+ if (0 == (cur_idx & 0x3)) {
+ rte_prefetch0(&desc_ring[cur_idx]);
+ rte_prefetch0(&buffer_ring[cur_idx]);
+ }
+
+ cur_mbuf = *cur_buffer;
+
+ *cur_buffer = new_mbuf;
+
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+ cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+
+ if (first_seg == NULL) {
+ first_seg = cur_mbuf;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+
+ if (0 == (qword1 & SXE2_RX_DESC_STATUS_EOP_MASK)) {
+ last_seg = cur_mbuf;
+ continue;
+ }
+
+ if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) ||
+ unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ continue;
+ }
+
+ cur_mbuf->next = NULL;
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= RTE_ETHER_CRC_LEN;
+
+ if (pkt_len <= RTE_ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ first_seg->nb_segs--;
+ last_seg->data_len = last_seg->data_len + pkt_len -
+ RTE_ETHER_CRC_LEN;
+ last_seg->next = NULL;
+ } else {
+ cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN;
+ }
+
+ } else if (pkt_len == 0) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ first_seg->nb_segs--;
+ last_seg->next = NULL;
+ }
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+ first_seg->port = rxq->port_id;
+
+ sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp);
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en)
+ sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp);
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+
+ rx_pkts[done_num] = first_seg;
+ done_num++;
+
+ first_seg = NULL;
+ }
+
+ rxq->processing_idx = cur_idx;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ sxe2_update_rx_tail(rxq, hold_num, cur_idx);
+
+ return done_num;
+}
+
+uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
+ volatile union sxe2_rx_desc *desc_ring;
+ volatile union sxe2_rx_desc *desc;
+ union sxe2_rx_desc desc_tmp;
+ struct rte_mbuf **buffer_ring;
+ struct rte_mbuf **cur_buffer;
+ struct rte_mbuf *cur_mbuf;
+ struct rte_mbuf *cur_mbuf_pay;
+ struct rte_mbuf *new_mbuf;
+ struct rte_mbuf *new_mbuf_pay = NULL;
+ struct rte_mbuf *first_seg;
+ struct rte_mbuf *last_seg;
+ uint64_t qword1;
+ uint16_t done_num;
+ uint16_t hold_num;
+ uint16_t cur_idx;
+ uint16_t pkt_len;
+ uint16_t hdr_len;
+
+ desc_ring = rxq->desc_ring;
+ buffer_ring = rxq->buffer_ring;
+ cur_idx = rxq->processing_idx;
+ first_seg = rxq->pkt_first_seg;
+ last_seg = rxq->pkt_last_seg;
+ done_num = 0;
+ hold_num = 0;
+ new_mbuf = NULL;
+
+ while (done_num < nb_pkts) {
+ desc = &desc_ring[cur_idx];
+ qword1 = rte_le_to_cpu_64(desc->wb.status_err_ptype_len);
+
+ if (0 == (SXE2_RX_DESC_STATUS_DD_MASK & qword1))
+ break;
+
+ if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 ||
+ first_seg == NULL) {
+ new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (unlikely(new_mbuf == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ break;
+ }
+ }
+
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+ new_mbuf_pay = rte_mbuf_raw_alloc(rxq->rx_seg[1].mp);
+ if (unlikely(new_mbuf_pay == NULL)) {
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed++;
+ if (new_mbuf != NULL)
+ rte_pktmbuf_free(new_mbuf);
+ new_mbuf = NULL;
+ break;
+ }
+ }
+
+ hold_num++;
+ desc_tmp = *desc;
+ cur_buffer = &buffer_ring[cur_idx];
+ cur_idx++;
+ if (unlikely(cur_idx == rxq->ring_depth))
+ cur_idx = 0;
+ rte_prefetch0(buffer_ring[cur_idx]);
+ if (0 == (cur_idx & 0x3)) {
+ rte_prefetch0(&desc_ring[cur_idx]);
+ rte_prefetch0(&buffer_ring[cur_idx]);
+ }
+ cur_mbuf = *cur_buffer;
+ if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ *cur_buffer = new_mbuf;
+ desc->read.hdr_addr = 0;
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+ } else {
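+ /* Buffer split: the header lands in new_mbuf, the payload in new_mbuf_pay. */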
+ if (first_seg == NULL) {
+ *cur_buffer = new_mbuf;
+ new_mbuf->next = new_mbuf_pay;
+ new_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ new_mbuf_pay->next = NULL;
+ new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ desc->read.hdr_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf));
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay));
+ } else {
+ cur_mbuf_pay = cur_mbuf->next;
+ new_mbuf_pay->next = NULL;
+ new_mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM;
+ cur_mbuf->next = new_mbuf_pay;
+ desc->read.hdr_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(cur_mbuf));
+ desc->read.pkt_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mbuf_pay));
+ cur_mbuf = cur_mbuf_pay;
+ }
+ }
+ new_mbuf = NULL;
+ if (0 == (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+ cur_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ if (first_seg == NULL) {
+ first_seg = cur_mbuf;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = pkt_len;
+ } else {
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+ } else {
+ if (first_seg == NULL) {
+ cur_mbuf->nb_segs = 2;
+ cur_mbuf->next->next = NULL;
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ hdr_len = SXE2_RX_DESC_HDR_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = hdr_len;
+ cur_mbuf->pkt_len = hdr_len + pkt_len;
+ cur_mbuf->next->data_len = pkt_len;
+ first_seg = cur_mbuf;
+ cur_mbuf = cur_mbuf->next;
+ last_seg = cur_mbuf;
+ } else {
+ cur_mbuf->nb_segs = 1;
+ cur_mbuf->next = NULL;
+ pkt_len = SXE2_RX_DESC_PKT_LEN_VAL_GET(qword1);
+ cur_mbuf->data_len = pkt_len;
+
+ first_seg->pkt_len += pkt_len;
+ first_seg->nb_segs++;
+ last_seg->next = cur_mbuf;
+ }
+ }
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+
+ rte_pktmbuf_dump(stdout, first_seg, rte_pktmbuf_pkt_len(first_seg));
+#endif
+
+ if (0 == (rte_le_to_cpu_64(desc_tmp.wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_EOP_MASK)) {
+ last_seg = cur_mbuf;
+ continue;
+ }
+
+ if (unlikely(qword1 & SXE2_RX_DESC_ERROR_RXE_MASK) ||
+ unlikely(qword1 & SXE2_RX_DESC_ERROR_OVERSIZE_MASK)) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ continue;
+ }
+
+ cur_mbuf->next = NULL;
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= RTE_ETHER_CRC_LEN;
+ if (pkt_len <= RTE_ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ cur_mbuf = NULL;
+ first_seg->nb_segs--;
+ last_seg->data_len = last_seg->data_len +
+ pkt_len - RTE_ETHER_CRC_LEN;
+ last_seg->next = NULL;
+ } else {
+ cur_mbuf->data_len = pkt_len - RTE_ETHER_CRC_LEN;
+ }
+ } else if (pkt_len == 0) {
+ rte_pktmbuf_free_seg(cur_mbuf);
+ cur_mbuf = NULL;
+ first_seg->nb_segs--;
+ last_seg->next = NULL;
+ }
+
+ first_seg->port = rxq->port_id;
+ sxe2_rx_mbuf_common_fields_fill(rxq, first_seg, &desc_tmp);
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en)
+ sxe2_rx_sw_stats_update(rxq, first_seg, &desc_tmp);
+
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, first_seg->data_off));
+
+ rx_pkts[done_num] = first_seg;
+ done_num++;
+
+ first_seg = NULL;
+ }
+
+ rxq->processing_idx = cur_idx;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ sxe2_update_rx_tail(rxq, hold_num, cur_idx);
+
+ return done_num;
+}
--
2.47.3
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v15 10/11] net/sxe2: add vectorized Rx and Tx
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (8 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 09/11] drivers: add data path for Rx and Tx liujie5
@ 2026-05-16 7:46 ` liujie5
2026-05-16 7:46 ` [PATCH v15 11/11] net/sxe2: implement Tx done cleanup liujie5
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch implements the vectorized data path for the sxe2 PMD.
It utilizes SIMD instructions (e.g., SSE) to process multiple
packets simultaneously, significantly improving throughput for
small packet processing.
The implementation includes (a simplified sketch of the batching idea follows the list):
* Vectorized Rx burst function for bulk descriptor processing.
* Vectorized Tx burst function with optimized resource cleanup.
* Capability flags update to reflect vectorized path support.
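As a rough illustration of how a DD-bit batch check works (a minimal
sketch only: the descriptor layout, DD-bit position, and all names
below are hypothetical and do not match the real sxe2 descriptor
format):

  #include <emmintrin.h>	/* SSE2 intrinsics */
  #include <stdint.h>

  struct wb_desc {	/* hypothetical 16-byte write-back descriptor */
  	uint64_t qw0;
  	uint64_t qw1;	/* assume bit 0 is the descriptor-done flag */
  };

  /* Count how many of the 4 descriptors starting at ring[idx] are
   * done, stopping at the first one that is not: the batch size for
   * one vectorized Rx loop iteration.
   */
  static unsigned int dd_batch(const struct wb_desc *ring, unsigned int idx)
  {
  	__m128i lo = _mm_set_epi64x((long long)ring[idx + 1].qw1,
  				(long long)ring[idx + 0].qw1);
  	__m128i hi = _mm_set_epi64x((long long)ring[idx + 3].qw1,
  				(long long)ring[idx + 2].qw1);
  	__m128i dd = _mm_set1_epi64x(1);	/* DD mask, one per qword */

  	/* Mask and compare bytewise: movemask bit 0 (bit 8) then
  	 * reflects the DD bit of the low (high) descriptor.
  	 */
  	unsigned int m0 = _mm_movemask_epi8(
  			_mm_cmpeq_epi8(_mm_and_si128(lo, dd), dd));
  	unsigned int m1 = _mm_movemask_epi8(
  			_mm_cmpeq_epi8(_mm_and_si128(hi, dd), dd));
  	unsigned int b = (m0 & 1) | (((m0 >> 8) & 1) << 1) |
  			((m1 & 1) << 2) | (((m1 >> 8) & 1) << 3);

  	/* Length of the run of set bits from bit 0 is the batch size. */
  	return __builtin_ctz(~b);
  }

The real Rx burst then loads the full descriptors and extracts packet
lengths, ptypes and offload flags with further shuffles once the batch
size is known.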
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/net/sxe2/meson.build | 5 +
drivers/net/sxe2/sxe2_ethdev.c | 31 +-
drivers/net/sxe2/sxe2_queue.c | 28 ++
drivers/net/sxe2/sxe2_queue.h | 4 +
drivers/net/sxe2/sxe2_txrx.c | 199 +++++++--
drivers/net/sxe2/sxe2_txrx.h | 11 +-
drivers/net/sxe2/sxe2_txrx_poll.c | 31 +-
drivers/net/sxe2/sxe2_txrx_poll.h | 4 +
drivers/net/sxe2/sxe2_txrx_vec.c | 200 +++++++++
drivers/net/sxe2/sxe2_txrx_vec.h | 73 ++++
drivers/net/sxe2/sxe2_txrx_vec_common.h | 235 ++++++++++
drivers/net/sxe2/sxe2_txrx_vec_sse.c | 549 ++++++++++++++++++++++++
12 files changed, 1296 insertions(+), 74 deletions(-)
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.c
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_common.h
create mode 100644 drivers/net/sxe2/sxe2_txrx_vec_sse.c
diff --git a/drivers/net/sxe2/meson.build b/drivers/net/sxe2/meson.build
index 5645e3ad61..3df57aee8c 100644
--- a/drivers/net/sxe2/meson.build
+++ b/drivers/net/sxe2/meson.build
@@ -13,6 +13,10 @@ deps += ['common_sxe2', 'hash','cryptodev','security']
includes += include_directories('../../common/sxe2')
+if arch_subdir == 'x86'
+ sources += files('sxe2_txrx_vec_sse.c')
+endif
+
sources += files(
'sxe2_ethdev.c',
'sxe2_cmd_chnl.c',
@@ -22,6 +26,7 @@ sources += files(
'sxe2_rx.c',
'sxe2_txrx_poll.c',
'sxe2_txrx.c',
+ 'sxe2_txrx_vec.c',
)
allow_internal_get_api = true
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index 8b76231057..d1bdc22bd0 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -107,25 +107,6 @@ static int32_t sxe2_dev_stop(struct rte_eth_dev *dev)
return ret;
}
-static int32_t sxe2_queues_start(struct rte_eth_dev *dev)
-{
- int32_t ret = 0;
- ret = sxe2_txqs_all_start(dev);
- if (ret) {
- PMD_LOG_ERR(INIT, "Failed to start tx queue.");
- goto l_end;
- }
-
- ret = sxe2_rxqs_all_start(dev);
- if (ret) {
- PMD_LOG_ERR(INIT, "Failed to start rx queue.");
- sxe2_txqs_all_stop(dev);
- }
-
-l_end:
- return ret;
-}
-
static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
{
int32_t ret = 0;
@@ -158,7 +139,7 @@ static int32_t sxe2_dev_start(struct rte_eth_dev *dev)
static int32_t sxe2_dev_close(struct rte_eth_dev *dev)
{
(void)sxe2_dev_stop(dev);
-
+ (void)sxe2_queues_release(dev);
sxe2_vsi_uninit(dev);
sxe2_dev_pci_map_uinit(dev);
@@ -296,13 +277,19 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.dev_close = sxe2_dev_close,
.dev_infos_get = sxe2_dev_infos_get,
+ .rx_queue_start = sxe2_rx_queue_start,
+ .rx_queue_stop = sxe2_rx_queue_stop,
+ .tx_queue_start = sxe2_tx_queue_start,
+ .tx_queue_stop = sxe2_tx_queue_stop,
.rx_queue_setup = sxe2_rx_queue_setup,
- .tx_queue_setup = sxe2_tx_queue_setup,
.rx_queue_release = sxe2_rx_queue_release,
+ .tx_queue_setup = sxe2_tx_queue_setup,
.tx_queue_release = sxe2_tx_queue_release,
.rxq_info_get = sxe2_rx_queue_info_get,
.txq_info_get = sxe2_tx_queue_info_get,
+ .rx_burst_mode_get = sxe2_rx_burst_mode_get,
+ .tx_burst_mode_get = sxe2_tx_burst_mode_get,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
@@ -774,8 +761,6 @@ static int32_t sxe2_dev_init(struct rte_eth_dev *dev,
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
sxe2_rx_mode_func_set(dev);
sxe2_tx_mode_func_set(dev);
- if (ret != 0)
- PMD_LOG_ERR(INIT, "Failed to mp init (secondary), ret=%d", ret);
goto l_end;
}
diff --git a/drivers/net/sxe2/sxe2_queue.c b/drivers/net/sxe2/sxe2_queue.c
index 93f8236381..1786d6ea4f 100644
--- a/drivers/net/sxe2/sxe2_queue.c
+++ b/drivers/net/sxe2/sxe2_queue.c
@@ -5,6 +5,8 @@
#include "sxe2_ethdev.h"
#include "sxe2_queue.h"
#include "sxe2_common_log.h"
+#include "sxe2_tx.h"
+#include "sxe2_rx.h"
void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
struct sxe2_drv_queue_caps *q_caps)
@@ -36,3 +38,29 @@ int32_t sxe2_queues_init(struct rte_eth_dev *dev)
return ret;
}
+
+int32_t sxe2_queues_start(struct rte_eth_dev *dev)
+{
+ int32_t ret = 0;
+
+ ret = sxe2_txqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start tx queue.");
+ goto l_end;
+ }
+
+ ret = sxe2_rxqs_all_start(dev);
+ if (ret) {
+ PMD_LOG_ERR(INIT, "Failed to start rx queue.");
+ sxe2_txqs_all_stop(dev);
+ }
+l_end:
+ return ret;
+}
+
+void sxe2_queues_release(struct rte_eth_dev *dev)
+{
+ sxe2_all_rxqs_release(dev);
+
+ sxe2_all_txqs_release(dev);
+}
diff --git a/drivers/net/sxe2/sxe2_queue.h b/drivers/net/sxe2/sxe2_queue.h
index e587e582fa..5195e2dd16 100644
--- a/drivers/net/sxe2/sxe2_queue.h
+++ b/drivers/net/sxe2/sxe2_queue.h
@@ -188,4 +188,8 @@ void sxe2_sw_queue_ctx_hw_cap_set(struct sxe2_adapter *adapter,
int32_t sxe2_queues_init(struct rte_eth_dev *dev);
+int32_t sxe2_queues_start(struct rte_eth_dev *dev);
+
+void sxe2_queues_release(struct rte_eth_dev *dev);
+
#endif /* __SXE2_QUEUE_H__ */
diff --git a/drivers/net/sxe2/sxe2_txrx.c b/drivers/net/sxe2/sxe2_txrx.c
index 2531b49a52..058bf86931 100644
--- a/drivers/net/sxe2/sxe2_txrx.c
+++ b/drivers/net/sxe2/sxe2_txrx.c
@@ -9,12 +9,11 @@
#include <rte_memzone.h>
#include <ethdev_driver.h>
#include <unistd.h>
-
#include "sxe2_txrx.h"
#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_vec.h"
#include "sxe2_txrx_poll.h"
#include "sxe2_ethdev.h"
-
#include "sxe2_common_log.h"
#include "sxe2_osal.h"
#include "sxe2_cmd_chnl.h"
@@ -22,6 +21,31 @@
#include <rte_cpuflags.h>
#endif
+int32_t __rte_cold
+sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
+ uint32_t *batch_flags)
+{
+ struct sxe2_tx_queue *txq;
+ int32_t ret = 0;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i];
+ if (txq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
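+ /* Batch path needs fast-free as the only offload and rs_thresh >= batch size. */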
+ if (txq->offloads != (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
+ txq->rs_thresh < SXE2_TX_PKTS_BURST_BATCH_NUM) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ }
+ *batch_flags = SXE2_TX_MODE_SIMPLE_BATCH;
+l_end:
+ return ret;
+}
static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
{
struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
@@ -32,27 +55,23 @@ static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
ret = -EINVAL;
goto l_end;
}
-
desc_idx = txq->next_use + offset;
- desc_idx = DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh);
+ desc_idx = SXE2_DIV_ROUND_UP(desc_idx, txq->rs_thresh) * (txq->rs_thresh);
if (desc_idx >= txq->ring_depth) {
desc_idx -= txq->ring_depth;
if (desc_idx >= txq->ring_depth)
desc_idx -= txq->ring_depth;
}
-
if (desc_idx == 0)
desc_idx = txq->rs_thresh - 1;
else
desc_idx -= 1;
-
if (rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE) ==
(txq->desc_ring[desc_idx].wb.dd &
rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_MASK)))
ret = RTE_ETH_TX_DESC_DONE;
else
ret = RTE_ETH_TX_DESC_FULL;
-
l_end:
return ret;
}
@@ -60,7 +79,6 @@ static int32_t sxe2_tx_desciptor_status(void *tx_queue, uint16_t offset)
static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
{
struct rte_mbuf *m_seg = mbuf;
-
while (m_seg != NULL) {
if (m_seg->data_len == 0)
return -EINVAL;
@@ -68,6 +86,7 @@ static inline int32_t sxe2_tx_mbuf_empty_check(struct rte_mbuf *mbuf)
}
return 0;
+
}
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
@@ -97,12 +116,10 @@ uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
rte_errno = EINVAL;
goto l_end;
}
-
if (mbuf->pkt_len < SXE2_TX_MIN_PKT_LEN) {
rte_errno = EINVAL;
goto l_end;
}
-
#ifdef RTE_ETHDEV_DEBUG_TX
ret = rte_validate_tx_offload(mbuf);
if (ret != 0) {
@@ -115,14 +132,12 @@ uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
rte_errno = -ret;
goto l_end;
}
-
ret = sxe2_tx_mbuf_empty_check(mbuf);
if (ret != 0) {
rte_errno = -ret;
goto l_end;
}
}
-
l_end:
return i;
}
@@ -131,16 +146,86 @@
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
uint32_t tx_mode_flags = 0;
+ int32_t ret;
+ uint32_t vec_flags;
+ uint32_t batch_flags;
PMD_INIT_FUNC_TRACE();
-
- dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
- dev->tx_pkt_burst = sxe2_tx_pkts;
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
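+ /* Probe vector and simple-batch support once in the primary process. */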
+ ret = sxe2_tx_vec_support_check(dev, &vec_flags);
+ if (ret == 0 &&
+ (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)) {
+ tx_mode_flags = vec_flags;
+#ifdef RTE_ARCH_X86
+ if ((tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) == 0)
+ tx_mode_flags |= SXE2_TX_MODE_VEC_SSE;
+#endif
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) {
+ ret = sxe2_tx_queues_vec_prepare(dev);
+ if (ret != 0)
+ tx_mode_flags &= (~SXE2_TX_MODE_VEC_SET_MASK);
+ }
+ }
+ ret = sxe2_tx_simple_batch_support_check(dev, &batch_flags);
+ if (ret == 0 && batch_flags == SXE2_TX_MODE_SIMPLE_BATCH)
+ tx_mode_flags |= SXE2_TX_MODE_SIMPLE_BATCH;
+ }
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK) {
+ dev->tx_pkt_prepare = NULL;
+#ifdef RTE_ARCH_X86
+ if (tx_mode_flags & SXE2_TX_MODE_VEC_OFFLOAD) {
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse;
+ } else {
+ dev->tx_pkt_burst = sxe2_tx_pkts_vec_sse_simple;
+ }
+#endif
+ } else {
+ if (tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH) {
+ dev->tx_pkt_prepare = NULL;
+ dev->tx_pkt_burst = sxe2_tx_pkts_simple;
+ } else {
+ dev->tx_pkt_prepare = sxe2_tx_pkts_prepare;
+ dev->tx_pkt_burst = sxe2_tx_pkts;
+ }
+ }
adapter->q_ctxt.tx_mode_flags = tx_mode_flags;
PMD_LOG_DEBUG(TX, "Tx mode flags:0x%016x port_id:%u.",
tx_mode_flags, dev->data->port_id);
}
+static const struct {
+ eth_tx_burst_t tx_burst;
+ const char *info;
+} sxe2_tx_burst_infos[] = {
+ { sxe2_tx_pkts, "Scalar" },
+#ifdef RTE_ARCH_X86
+ { sxe2_tx_pkts_vec_sse, "Vector SSE" },
+ { sxe2_tx_pkts_vec_sse_simple, "Vector SSE Simple" },
+#endif
+};
+
+int32_t sxe2_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode)
+{
+ eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+ int32_t ret = -EINVAL;
+ uint32_t i;
+ uint32_t size;
+
+ size = RTE_DIM(sxe2_tx_burst_infos);
+ for (i = 0; i < size; ++i) {
+ if (pkt_burst == sxe2_tx_burst_infos[i].tx_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ sxe2_tx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+ return ret;
+}
+
static int32_t sxe2_rx_desciptor_status(void *rx_queue, uint16_t offset)
{
struct sxe2_rx_queue *rxq = (struct sxe2_rx_queue *)rx_queue;
@@ -151,22 +235,18 @@ static int32_t sxe2_rx_desciptor_status(void *rx_queue, uint16_t offset)
ret = -EINVAL;
goto l_end;
}
-
if (offset >= rxq->ring_depth - rxq->hold_num) {
ret = RTE_ETH_RX_DESC_UNAVAIL;
goto l_end;
}
-
if (rxq->processing_idx + offset >= rxq->ring_depth)
desc = &rxq->desc_ring[rxq->processing_idx + offset - rxq->ring_depth];
else
desc = &rxq->desc_ring[rxq->processing_idx + offset];
-
if (rte_le_to_cpu_64(desc->wb.status_err_ptype_len) & SXE2_RX_DESC_STATUS_DD_MASK)
ret = RTE_ETH_RX_DESC_DONE;
else
ret = RTE_ETH_RX_DESC_AVAIL;
-
l_end:
PMD_LOG_DEBUG(RX, "Rx queue desc[%u] status:%d queue_id:%u port_id:%u",
offset, ret, rxq->queue_id, rxq->port_id);
@@ -189,55 +269,85 @@ static int32_t sxe2_rx_queue_count(void *rx_queue)
else
desc += SXE2_RX_QUEUE_CHECK_INTERVAL_NUM;
}
-
PMD_LOG_DEBUG(RX, "Rx queue done desc count:%u queue_id:%u port_id:%u",
done_num, rxq->queue_id, rxq->port_id);
-
return done_num;
}
-static bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
-{
- struct sxe2_rx_queue *rxq;
- bool en = false;
- uint16_t i;
-
- for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
- if (rxq == NULL)
- continue;
-
- if (0 != (rxq->offloads & offload)) {
- en = true;
- goto l_end;
- }
- }
-
-l_end:
- return en;
-}
-
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev)
{
struct sxe2_adapter *adapter = SXE2_DEV_PRIVATE_TO_ADAPTER(dev);
uint32_t rx_mode_flags = 0;
+ int32_t ret;
+ uint32_t vec_flags;
PMD_INIT_FUNC_TRACE();
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ ret = sxe2_rx_vec_support_check(dev, &vec_flags);
+ if (ret == 0 &&
+ rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+ rx_mode_flags = vec_flags;
+#ifdef RTE_ARCH_X86
+ if (((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) == 0) &&
+ rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)
+ rx_mode_flags |= SXE2_RX_MODE_VEC_SSE;
+#endif
+ if ((rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) != 0) {
+ ret = sxe2_rx_queues_vec_prepare(dev);
+ if (ret != 0)
+ rx_mode_flags &= (~SXE2_RX_MODE_VEC_SET_MASK);
+ }
+ }
+ }
+#ifdef RTE_ARCH_X86
+ if (rx_mode_flags & SXE2_RX_MODE_VEC_SET_MASK) {
+ dev->rx_pkt_burst = sxe2_rx_pkts_scattered_vec_sse_offload;
+ goto l_end;
+ }
+#endif
if (sxe2_rx_offload_en_check(dev, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT))
dev->rx_pkt_burst = sxe2_rx_pkts_scattered_split;
else
dev->rx_pkt_burst = sxe2_rx_pkts_scattered;
-
+l_end:
PMD_LOG_DEBUG(RX, "Rx mode flags:0x%016x port_id:%u.",
rx_mode_flags, dev->data->port_id);
adapter->q_ctxt.rx_mode_flags = rx_mode_flags;
}
+static const struct {
+ eth_rx_burst_t rx_burst;
+ const char *info;
+} sxe2_rx_burst_infos[] = {
+ { sxe2_rx_pkts_scattered, "Scalar Scattered" },
+ { sxe2_rx_pkts_scattered_split, "Scalar Scattered split" },
+#ifdef RTE_ARCH_X86
+ { sxe2_rx_pkts_scattered_vec_sse_offload, "Vector SSE Scattered" },
+#endif
+};
+
+int32_t sxe2_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode)
+{
+ eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ int32_t ret = -EINVAL;
+ uint32_t i, size;
+ size = RTE_DIM(sxe2_rx_burst_infos);
+ for (i = 0; i < size; ++i) {
+ if (pkt_burst == sxe2_rx_burst_infos[i].rx_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ sxe2_rx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+ return ret;
+}
+
void sxe2_set_common_function(struct rte_eth_dev *dev)
{
PMD_INIT_FUNC_TRACE();
-
dev->rx_queue_count = sxe2_rx_queue_count;
dev->rx_descriptor_status = sxe2_rx_desciptor_status;
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
index f6558e2189..61c6641e49 100644
--- a/drivers/net/sxe2/sxe2_txrx.h
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -6,16 +6,17 @@
#define SXE2_TXRX_H
#include <ethdev_driver.h>
#include "sxe2_queue.h"
-
void sxe2_set_common_function(struct rte_eth_dev *dev);
+int32_t __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
+ uint32_t *batch_flags);
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
-
void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
-
void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
-
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
-
+int32_t sxe2_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode);
+int32_t sxe2_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id, struct rte_eth_burst_mode *mode);
#endif /* SXE2_TXRX_H */
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
index 6d37fdef36..dc6a83e380 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.c
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -125,7 +125,7 @@ sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt)
uint16_t count = 0;
while (m_seg != NULL) {
- count += DIV_ROUND_UP(m_seg->data_len,
+ count += SXE2_DIV_ROUND_UP(m_seg->data_len,
SXE2_TX_MAX_DATA_NUM_PER_DESC);
m_seg = m_seg->next;
}
@@ -369,11 +369,12 @@ uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt
desc->read.type_cmd_off_bsz_l2t |=
rte_cpu_to_le_64(((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT);
}
+ goto l_end_of_tx;
l_exit_logic:
if (tx_num == 0)
goto l_end;
- goto l_end_of_tx;
+
l_end_of_tx:
SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
@@ -483,6 +484,32 @@ static inline uint16_t sxe2_tx_pkts_batch(void *tx_queue,
return nb_pkts;
}
+uint16_t sxe2_tx_pkts_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ uint16_t tx_done_num;
+ uint16_t tx_once_num;
+ uint16_t tx_need_num;
+ if (likely(nb_pkts <= SXE2_TX_PKTS_BURST_BATCH_NUM)) {
+ tx_done_num = sxe2_tx_pkts_batch(tx_queue,
+ tx_pkts, nb_pkts);
+ goto l_end;
+ }
+ tx_done_num = 0;
+ while (nb_pkts) {
+ tx_need_num = RTE_MIN(nb_pkts, SXE2_TX_PKTS_BURST_BATCH_NUM);
+ tx_once_num = sxe2_tx_pkts_batch(tx_queue,
+ &tx_pkts[tx_done_num],
+ tx_need_num);
+ nb_pkts -= tx_once_num;
+ tx_done_num += tx_once_num;
+ if (tx_once_num < tx_need_num)
+ break;
+ }
+l_end:
+ return tx_done_num;
+}
+
static inline void
sxe2_update_rx_tail(struct sxe2_rx_queue *rxq, uint16_t hold_num, uint16_t rx_id)
{
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.h b/drivers/net/sxe2/sxe2_txrx_poll.h
index f45e33f9b7..6bb2238a2f 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.h
+++ b/drivers/net/sxe2/sxe2_txrx_poll.h
@@ -9,6 +9,8 @@
uint16_t sxe2_tx_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_tx_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
uint16_t sxe2_rx_pkts_scattered(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint16_t sxe2_rx_pkts_scattered_split(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
diff --git a/drivers/net/sxe2/sxe2_txrx_vec.c b/drivers/net/sxe2/sxe2_txrx_vec.c
new file mode 100644
index 0000000000..1e03b53d67
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec.c
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include "sxe2_txrx_vec.h"
+#include "sxe2_txrx_vec_common.h"
+#include "sxe2_queue.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+
+int32_t __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags)
+{
+ struct sxe2_rx_queue *rxq;
+ int32_t ret = 0;
+ uint16_t i;
+
+ *vec_flags = SXE2_RX_MODE_VEC_SIMPLE;
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+ if (!rte_is_power_of_2(rxq->ring_depth)) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if (rxq->rx_free_thresh < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC &&
+ (rxq->ring_depth % rxq->rx_free_thresh) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((rxq->offloads & SXE2_RX_VEC_NO_SUPPORT_OFFLOAD) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((rxq->offloads & SXE2_RX_VEC_SUPPORT_OFFLOAD) != 0)
+ *vec_flags = SXE2_RX_MODE_VEC_OFFLOAD;
+ }
+l_end:
+ return ret;
+}
+
+bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload)
+{
+ struct sxe2_rx_queue *rxq;
+ bool en = false;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL)
+ continue;
+ if ((rxq->offloads & offload) != 0) {
+ en = true;
+ goto l_end;
+ }
+ }
+l_end:
+ return en;
+}
+
+static inline void sxe2_rx_queue_mbufs_release_vec(struct sxe2_rx_queue *rxq)
+{
+ const uint16_t mask = rxq->ring_depth - 1;
+ uint16_t i;
+
+ if (unlikely(!rxq->buffer_ring)) {
+ PMD_LOG_DEBUG(RX, "Rx queue release mbufs vec, buffer_ring if NULL."
+ "port_id:%u queue_id:%u", rxq->port_id, rxq->queue_id);
+ return;
+ }
+ if (rxq->realloc_num >= rxq->ring_depth)
+ return;
+ if (rxq->realloc_num == 0) {
+ for (i = 0; i < rxq->ring_depth; ++i) {
+ if (rxq->buffer_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ } else {
+ for (i = rxq->processing_idx;
+ i != rxq->realloc_start;
+ i = (i + 1) & mask) {
+ if (rxq->buffer_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->buffer_ring[i]);
+ rxq->buffer_ring[i] = NULL;
+ }
+ }
+ }
+ rxq->realloc_num = rxq->ring_depth;
+ memset(rxq->buffer_ring, 0, rxq->ring_depth * sizeof(rxq->buffer_ring[0]));
+}
+
+static inline void sxe2_rx_queue_vec_init(struct sxe2_rx_queue *rxq)
+{
+ uintptr_t data;
+ struct rte_mbuf mbuf_def;
+
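+ /* Build a template mbuf and cache its 64-bit rearm_data word for fast re-init. */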
+ memset(&mbuf_def, 0, sizeof(mbuf_def));
+ mbuf_def.buf_addr = 0;
+ mbuf_def.nb_segs = 1;
+ mbuf_def.data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf_def.port = rxq->port_id;
+ rte_mbuf_refcnt_set(&mbuf_def, 1);
+ rte_compiler_barrier();
+ data = (uintptr_t)&mbuf_def.rearm_data;
+ rxq->mbuf_init_value = *(uint64_t *)data;
+}
+
+int32_t __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev)
+{
+ struct sxe2_rx_queue *rxq = NULL;
+ int32_t ret = 0;
+ uint16_t i;
+ for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+ rxq = (struct sxe2_rx_queue *)dev->data->rx_queues[i];
+ if (rxq == NULL) {
+ PMD_LOG_INFO(RX, "Failed to prepare rx queue, rxq[%d] is NULL", i);
+ continue;
+ }
+ rxq->ops.mbufs_release = sxe2_rx_queue_mbufs_release_vec;
+ sxe2_rx_queue_vec_init(rxq);
+ }
+ return ret;
+}
+
+int32_t __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags)
+{
+ struct sxe2_tx_queue *txq;
+ int32_t ret = 0;
+ uint32_t i;
+ *vec_flags = SXE2_TX_MODE_VEC_SIMPLE;
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = (struct sxe2_tx_queue *)dev->data->tx_queues[i];
+ if (txq == NULL) {
+ ret = -EINVAL;
+ goto l_end;
+ }
+ if (txq->rs_thresh < SXE2_TX_RS_THRESH_MIN_VEC ||
+ txq->rs_thresh > SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((txq->offloads & SXE2_TX_VEC_NO_SUPPORT_OFFLOAD) != 0) {
+ ret = -ENOTSUP;
+ goto l_end;
+ }
+ if ((txq->offloads & SXE2_TX_VEC_SUPPORT_OFFLOAD) != 0)
+ *vec_flags = SXE2_TX_MODE_VEC_OFFLOAD;
+ }
+l_end:
+ return ret;
+}
+
+static void sxe2_tx_queue_mbufs_release_vec(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ uint16_t i;
+
+ if (unlikely(txq == NULL || txq->buffer_ring == NULL)) {
+ PMD_LOG_ERR(TX, "Tx release mbufs vec, invalid params.");
+ return;
+ }
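+ /* Free the unreclaimed window between the last cleaned slot and next_use. */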
+ i = txq->next_dd - (txq->rs_thresh - 1);
+ buffer = txq->buffer_ring;
+ if (txq->next_use < i) {
+ for ( ; i < txq->ring_depth; ++i) {
+ if (buffer[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer[i].mbuf);
+ buffer[i].mbuf = NULL;
+ }
+ }
+ i = 0;
+ }
+ for (; i < txq->next_use; ++i) {
+ if (buffer[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer[i].mbuf);
+ buffer[i].mbuf = NULL;
+ }
+ }
+}
+
+int32_t __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev)
+{
+ struct sxe2_tx_queue *txq = NULL;
+ int32_t ret = 0;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+ txq = dev->data->tx_queues[i];
+ if (txq == NULL) {
+ PMD_LOG_INFO(TX, "Failed to prepare tx queue, txq[%d] is NULL", i);
+ continue;
+ }
+ txq->ops.mbufs_release = sxe2_tx_queue_mbufs_release_vec;
+ }
+ return ret;
+}
diff --git a/drivers/net/sxe2/sxe2_txrx_vec.h b/drivers/net/sxe2/sxe2_txrx_vec.h
new file mode 100644
index 0000000000..ff2f2f1d5c
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef _SXE2_TXRX_VEC_H_
+#define _SXE2_TXRX_VEC_H_
+#include <ethdev_driver.h>
+#include "sxe2_queue.h"
+
+#define SXE2_RX_MODE_VEC_SIMPLE RTE_BIT32(0)
+#define SXE2_RX_MODE_VEC_OFFLOAD RTE_BIT32(1)
+#define SXE2_RX_MODE_VEC_SSE RTE_BIT32(2)
+#define SXE2_RX_MODE_VEC_AVX2 RTE_BIT32(3)
+#define SXE2_RX_MODE_VEC_AVX512 RTE_BIT32(4)
+#define SXE2_RX_MODE_VEC_NEON RTE_BIT32(5)
+#define SXE2_RX_MODE_BATCH_ALLOC RTE_BIT32(10)
+#define SXE2_RX_MODE_VEC_SET_MASK (SXE2_RX_MODE_VEC_SIMPLE | \
+ SXE2_RX_MODE_VEC_OFFLOAD | SXE2_RX_MODE_VEC_SSE | \
+ SXE2_RX_MODE_VEC_AVX2 | SXE2_RX_MODE_VEC_AVX512 | \
+ SXE2_RX_MODE_VEC_NEON)
+#define SXE2_TX_MODE_VEC_SIMPLE RTE_BIT32(0)
+#define SXE2_TX_MODE_VEC_OFFLOAD RTE_BIT32(1)
+#define SXE2_TX_MODE_VEC_SSE RTE_BIT32(2)
+#define SXE2_TX_MODE_VEC_AVX2 RTE_BIT32(3)
+#define SXE2_TX_MODE_VEC_AVX512 RTE_BIT32(4)
+#define SXE2_TX_MODE_VEC_NEON RTE_BIT32(5)
+#define SXE2_TX_MODE_SIMPLE_BATCH RTE_BIT32(10)
+#define SXE2_TX_MODE_VEC_SET_MASK (SXE2_TX_MODE_VEC_SIMPLE | \
+ SXE2_TX_MODE_VEC_OFFLOAD | SXE2_TX_MODE_VEC_SSE | \
+ SXE2_TX_MODE_VEC_AVX2 | SXE2_TX_MODE_VEC_AVX512 | \
+ SXE2_TX_MODE_VEC_NEON)
+#define SXE2_TX_VEC_NO_SUPPORT_OFFLOAD ( \
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+ RTE_ETH_TX_OFFLOAD_SECURITY | \
+ RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
+#define SXE2_TX_VEC_SUPPORT_OFFLOAD ( \
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+#define SXE2_RX_VEC_NO_SUPPORT_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
+ RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | \
+ RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SECURITY | \
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define SXE2_RX_VEC_SUPPORT_OFFLOAD ( \
+ RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+ RTE_ETH_RX_OFFLOAD_RSS_HASH)
+#ifdef RTE_ARCH_X86
+uint16_t sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_tx_pkts_vec_sse_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+#endif
+int32_t __rte_cold sxe2_tx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags);
+int32_t __rte_cold sxe2_tx_queues_vec_prepare(struct rte_eth_dev *dev);
+int32_t __rte_cold sxe2_rx_vec_support_check(struct rte_eth_dev *dev, uint32_t *vec_flags);
+bool __rte_cold sxe2_rx_offload_en_check(struct rte_eth_dev *dev, uint64_t offload);
+int32_t __rte_cold sxe2_rx_queues_vec_prepare(struct rte_eth_dev *dev);
+
+#endif /* _SXE2_TXRX_VEC_H_ */
diff --git a/drivers/net/sxe2/sxe2_txrx_vec_common.h b/drivers/net/sxe2/sxe2_txrx_vec_common.h
new file mode 100644
index 0000000000..1ce687e09f
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec_common.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#ifndef __SXE2_TXRX_VEC_COMMON_H__
+#define __SXE2_TXRX_VEC_COMMON_H__
+#include <rte_atomic.h>
+#ifdef PCLINT
+#include "avx_stub.h"
+#endif
+#include "sxe2_rx.h"
+#include "sxe2_queue.h"
+#include "sxe2_tx.h"
+#include "sxe2_vsi.h"
+#include "sxe2_ethdev.h"
+#define SXE2_RX_NUM_PER_LOOP_SSE 4
+#define SXE2_RX_NUM_PER_LOOP_AVX 8
+#define SXE2_RX_NUM_PER_LOOP_NEON 4
+#define SXE2_RX_REARM_THRESH_VEC 64
+#define SXE2_RX_PKTS_BURST_BATCH_NUM_VEC 32
+#define SXE2_TX_RS_THRESH_MIN_VEC 32
+#define SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC 64
+
+static __rte_always_inline void
+sxe2_tx_pkts_mbuf_fill(struct sxe2_tx_buffer *buffer,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ uint16_t i;
+ for (i = 0; i < nb_pkts; ++i)
+ buffer[i].mbuf = tx_pkts[i];
+}
+
+static __rte_always_inline int32_t
+sxe2_tx_bufs_free_vec(struct sxe2_tx_queue *txq)
+{
+ struct sxe2_tx_buffer *buffer;
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *mbuf_free_arr[SXE2_TX_FREE_BUFFER_SIZE_MAX_VEC];
+ int32_t ret;
+ uint32_t i;
+ uint16_t rs_thresh;
+ uint16_t free_num;
+ if ((txq->desc_ring[txq->next_dd].wb.dd &
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_MASK)) !=
+ rte_cpu_to_le_64(SXE2_TX_DESC_DTYPE_DESC_DONE)) {
+ ret = 0;
+ goto l_end;
+ }
+ rs_thresh = txq->rs_thresh;
+ buffer = &txq->buffer_ring[txq->next_dd - (rs_thresh - 1)];
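+ /* rte_pktmbuf_prefree_seg() returns NULL when the mbuf is still referenced. */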
+ mbuf = rte_pktmbuf_prefree_seg(buffer[0].mbuf);
+ if (likely(mbuf)) {
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf);
+ if (likely(mbuf)) {
+ if (likely(mbuf->pool == mbuf_free_arr[0]->pool)) {
+ mbuf_free_arr[free_num] = mbuf;
+ free_num++;
+ } else {
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ mbuf_free_arr[0] = mbuf;
+ free_num = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(mbuf_free_arr[0]->pool,
+ (void *)mbuf_free_arr, free_num);
+ } else {
+ for (i = 1; i < rs_thresh; ++i) {
+ mbuf = rte_pktmbuf_prefree_seg(buffer[i].mbuf);
+ if (mbuf != NULL)
+ rte_mempool_put(mbuf->pool, mbuf);
+ }
+ }
+ txq->desc_free_num += rs_thresh;
+ txq->next_dd += rs_thresh;
+ if (txq->next_dd >= txq->ring_depth)
+ txq->next_dd = rs_thresh - 1;
+ ret = rs_thresh;
+l_end:
+ return ret;
+}
+
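+/* Translate mbuf ol_flags (L3/L4 checksum, VLAN insertion) into the
+ * command and offset fields of Tx data-descriptor qword 1.
+ */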
+static inline void
+sxe2_tx_desc_fill_offloads(struct rte_mbuf *mbuf, uint64_t *desc_qw1)
+{
+ uint64_t offloads = mbuf->ol_flags;
+ uint32_t desc_cmd = 0;
+ uint32_t desc_offset = 0;
+ if (offloads & RTE_MBUF_F_TX_IP_CKSUM) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4_CSUM;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV4) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV4;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ } else if (offloads & RTE_MBUF_F_TX_IPV6) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IIPT_IPV6;
+ desc_offset |= SXE2_TX_DATA_DESC_IPLEN_VAL(mbuf->l3_len);
+ }
+ switch (offloads & RTE_MBUF_F_TX_L4_MASK) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_TCP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_SCTP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_L4T_EOFT_UDP;
+ desc_offset |= SXE2_TX_DATA_DESC_L4LEN_VAL(mbuf->l4_len);
+ break;
+ default:
+ break;
+ }
+ *desc_qw1 |= ((uint64_t)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT;
+ if (offloads & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
+ desc_cmd |= SXE2_TX_DATA_DESC_CMD_IL2TAG1;
+ *desc_qw1 |= ((uint64_t)mbuf->vlan_tci) << SXE2_TX_DATA_DESC_L2TAG1_SHIFT;
+ }
+ *desc_qw1 |= ((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT;
+}
+#define SXE2_RX_UMBCAST_FLAGS_VAL_GET(_flags) \
+ (((_flags) & 0x30) >> 4)
+
+static inline void sxe2_vf_rx_vec_sw_stats_cnt(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf *mbuf, uint8_t umbcast_flag)
+{
+ if (rxq->vsi->adapter->devargs.sw_stats_en) {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.bytes,
+ mbuf->pkt_len + RTE_ETHER_CRC_LEN, rte_memory_order_relaxed);
+ switch (SXE2_RX_UMBCAST_FLAGS_VAL_GET(umbcast_flag)) {
+ case SXE2_RX_DESC_STATUS_UNICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.unicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_MUTICAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.multicast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ case SXE2_RX_DESC_STATUS_BOARDCAST:
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.broadcast_pkts, 1,
+ rte_memory_order_relaxed);
+ break;
+ default:
+ break;
+ }
+ }
+}
+
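+/* Reassemble multi-segment packets from a vector Rx burst, adjusting
+ * CRC length accounting across segments and dropping packets whose
+ * descriptors carry receive-error flags.
+ */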
+static inline uint16_t
+sxe2_rx_pkts_refactor(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **mbuf_bufs, uint16_t mbuf_num,
+ uint8_t *split_rxe_flags, uint8_t *umbcast_flags)
+{
+ struct rte_mbuf *done_pkts[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+ struct rte_mbuf *tmp_seg;
+ uint16_t done_num, buf_idx;
+ done_num = 0;
+ for (buf_idx = 0; buf_idx < mbuf_num; buf_idx++) {
+ if (last_seg) {
+ last_seg->next = mbuf_bufs[buf_idx];
+ mbuf_bufs[buf_idx]->data_len += rxq->crc_len;
+ first_seg->nb_segs++;
+ first_seg->pkt_len += mbuf_bufs[buf_idx]->data_len;
+ last_seg = last_seg->next;
+ if (split_rxe_flags[buf_idx] == 0) {
+ first_seg->hash = last_seg->hash;
+ first_seg->vlan_tci = last_seg->vlan_tci;
+ first_seg->ol_flags = last_seg->ol_flags;
+ first_seg->pkt_len -= rxq->crc_len;
+ if (last_seg->data_len > rxq->crc_len) {
+ last_seg->data_len -= rxq->crc_len;
+ } else {
+ tmp_seg = first_seg;
+ first_seg->nb_segs--;
+ while (tmp_seg->next != last_seg)
+ tmp_seg = tmp_seg->next;
+ tmp_seg->data_len -= (rxq->crc_len - last_seg->data_len);
+ tmp_seg->next = NULL;
+ rte_pktmbuf_free_seg(last_seg);
+ last_seg = NULL;
+ }
+ done_pkts[done_num++] = first_seg;
+ sxe2_vf_rx_vec_sw_stats_cnt(rxq, first_seg, umbcast_flags[buf_idx]);
+ first_seg = NULL;
+ last_seg = NULL;
+ } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) {
+ continue;
+ } else {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ first_seg->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free(first_seg);
+ first_seg = NULL;
+ last_seg = NULL;
+ continue;
+ }
+ } else {
+ if (split_rxe_flags[buf_idx] == 0) {
+ done_pkts[done_num++] = mbuf_bufs[buf_idx];
+ sxe2_vf_rx_vec_sw_stats_cnt(rxq, mbuf_bufs[buf_idx],
+ umbcast_flags[buf_idx]);
+ continue;
+ } else if (split_rxe_flags[buf_idx] & SXE2_RX_DESC_STATUS_EOP_MASK) {
+ first_seg = mbuf_bufs[buf_idx];
+ last_seg = first_seg;
+ mbuf_bufs[buf_idx]->data_len += rxq->crc_len;
+ mbuf_bufs[buf_idx]->pkt_len += rxq->crc_len;
+ } else {
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_pkts, 1,
+ rte_memory_order_relaxed);
+ rte_atomic_fetch_add_explicit(&rxq->sw_stats.drop_bytes,
+ mbuf_bufs[buf_idx]->pkt_len - rxq->crc_len + RTE_ETHER_CRC_LEN,
+ rte_memory_order_relaxed);
+ rte_pktmbuf_free_seg(mbuf_bufs[buf_idx]);
+ continue;
+ }
+ }
+ }
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+ rte_memcpy(mbuf_bufs, done_pkts, done_num * (sizeof(struct rte_mbuf *)));
+ return done_num;
+}
+#endif /* __SXE2_TXRX_VEC_COMMON_H__ */
diff --git a/drivers/net/sxe2/sxe2_txrx_vec_sse.c b/drivers/net/sxe2/sxe2_txrx_vec_sse.c
new file mode 100644
index 0000000000..f6e3f45937
--- /dev/null
+++ b/drivers/net/sxe2/sxe2_txrx_vec_sse.c
@@ -0,0 +1,549 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C), 2025, Wuxi Stars Micro System Technologies Co., Ltd.
+ */
+
+#include <ethdev_driver.h>
+#include <rte_bitops.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_vect.h>
+#include "rte_common.h"
+#include "sxe2_ethdev.h"
+#include "sxe2_common_log.h"
+#include "sxe2_queue.h"
+#include "sxe2_txrx_vec.h"
+#include "sxe2_txrx_vec_common.h"
+#include "sxe2_vsi.h"
+
+static __rte_always_inline void
+sxe2_tx_desc_fill_one_sse(volatile union sxe2_tx_data_desc *desc,
+ struct rte_mbuf *pkt,
+ uint64_t desc_cmd, bool with_offloads)
+{
+ __m128i data_desc;
+ uint64_t desc_qw1;
+ uint32_t desc_offset;
+ desc_qw1 = (SXE2_TX_DESC_DTYPE_DATA |
+ ((uint64_t)desc_cmd) << SXE2_TX_DATA_DESC_CMD_SHIFT |
+ ((uint64_t)pkt->data_len) << SXE2_TX_DATA_DESC_BUF_SZ_SHIFT);
+ desc_offset = SXE2_TX_DATA_DESC_MACLEN_VAL(pkt->l2_len);
+ desc_qw1 |= ((uint64_t)desc_offset) << SXE2_TX_DATA_DESC_OFFSET_SHIFT;
+ if (with_offloads)
+ sxe2_tx_desc_fill_offloads(pkt, &desc_qw1);
+ data_desc = _mm_set_epi64x(desc_qw1, rte_pktmbuf_iova(pkt));
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, desc), data_desc);
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkts_vec_sse_batch(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts, bool with_offloads)
+{
+ volatile union sxe2_tx_data_desc *desc;
+ struct sxe2_tx_buffer *buffer;
+ uint16_t next_use;
+ uint16_t res_num;
+ uint16_t tx_num;
+ uint16_t i;
+ if (txq->desc_free_num < txq->free_thresh)
+ (void)sxe2_tx_bufs_free_vec(txq);
+ nb_pkts = RTE_MIN(txq->desc_free_num, nb_pkts);
+ if (unlikely(nb_pkts == 0)) {
+ PMD_LOG_DEBUG(TX, "Tx pkts sse batch: no free descriptors, "
+ "free_desc=%u, need_tx_pkts=%u",
+ txq->desc_free_num, nb_pkts);
+ goto l_end;
+ }
+ tx_num = nb_pkts;
+ next_use = txq->next_use;
+ desc = &txq->desc_ring[next_use];
+ buffer = &txq->buffer_ring[next_use];
+ txq->desc_free_num -= nb_pkts;
+ res_num = txq->ring_depth - txq->next_use;
+ if (tx_num >= res_num) {
+ sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, res_num);
+ for (i = 0; i < res_num - 1; ++i, ++tx_pkts, ++desc) {
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts,
+ SXE2_TX_DATA_DESC_CMD_EOP,
+ with_offloads);
+ }
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts++,
+ (SXE2_TX_DATA_DESC_CMD_EOP | SXE2_TX_DATA_DESC_CMD_RS),
+ with_offloads);
+ tx_num -= res_num;
+ next_use = 0;
+ txq->next_rs = txq->rs_thresh - 1;
+ desc = &txq->desc_ring[next_use];
+ buffer = &txq->buffer_ring[next_use];
+ }
+ sxe2_tx_pkts_mbuf_fill(buffer, tx_pkts, tx_num);
+ for (i = 0; i < tx_num; ++i, ++tx_pkts, ++desc) {
+ sxe2_tx_desc_fill_one_sse(desc, *tx_pkts,
+ SXE2_TX_DATA_DESC_CMD_EOP,
+ with_offloads);
+ }
+ next_use += tx_num;
+ if (next_use > txq->next_rs) {
+ txq->desc_ring[txq->next_rs].read.type_cmd_off_bsz_l2t |=
+ rte_cpu_to_le_64(SXE2_TX_DATA_DESC_CMD_RS_MASK);
+ txq->next_rs += txq->rs_thresh;
+ }
+ txq->next_use = next_use;
+ SXE2_PCI_REG_WRITE_WC(txq->tdt_reg_addr, next_use);
+ PMD_LOG_DEBUG(TX, "port_id=%u queue_id=%u next_use=%u send_pkts=%u",
+ txq->port_id, txq->queue_id, next_use, nb_pkts);
+l_end:
+ return nb_pkts;
+}
+
+static __rte_always_inline uint16_t
+sxe2_tx_pkts_vec_sse_common(struct sxe2_tx_queue *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts, bool with_offloads)
+{
+ uint16_t tx_done_num = 0;
+ uint16_t tx_once_num;
+ uint16_t tx_need_num;
+ while (nb_pkts) {
+ tx_need_num = RTE_MIN(nb_pkts, txq->rs_thresh);
+ tx_once_num = sxe2_tx_pkts_vec_sse_batch(txq,
+ tx_pkts + tx_done_num,
+ tx_need_num, with_offloads);
+ nb_pkts -= tx_once_num;
+ tx_done_num += tx_once_num;
+ if (tx_once_num < tx_need_num)
+ break;
+ }
+ return tx_done_num;
+}
+
+uint16_t sxe2_tx_pkts_vec_sse_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts, false);
+}
+uint16_t sxe2_tx_pkts_vec_sse(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ return sxe2_tx_pkts_vec_sse_common((struct sxe2_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts, true);
+}
+
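+/* Refill the Rx ring: bulk-allocate mbufs, write their DMA address
+ * (buffer address plus headroom) into the descriptors two at a time,
+ * then advance the tail register.
+ */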
+static inline void sxe2_rx_queue_rearm_sse(struct sxe2_rx_queue *rxq)
+{
+ volatile union sxe2_rx_desc *desc;
+ struct rte_mbuf **buffer;
+ struct rte_mbuf *mbuf0, *mbuf1;
+ __m128i dma_addr0, dma_addr1;
+ __m128i virt_addr0, virt_addr1;
+ __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM,
+ RTE_PKTMBUF_HEADROOM);
+ int32_t ret;
+ uint16_t i;
+ uint16_t new_tail;
+
+ buffer = &rxq->buffer_ring[rxq->realloc_start];
+ desc = &rxq->desc_ring[rxq->realloc_start];
+ ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)buffer,
+ SXE2_RX_REARM_THRESH_VEC);
+ if (ret != 0) {
+ PMD_LOG_INFO(RX, "Rx mbuf vec alloc failed port_id=%u "
+ "queue_id=%u", rxq->port_id, rxq->queue_id);
+ if ((rxq->realloc_num + SXE2_RX_REARM_THRESH_VEC) >= rxq->ring_depth) {
+ dma_addr0 = _mm_setzero_si128();
+ for (i = 0; i < SXE2_RX_NUM_PER_LOOP_SSE; ++i) {
+ buffer[i] = &rxq->fake_mbuf;
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc[i].read),
+ dma_addr0);
+ }
+ }
+ rxq->vsi->adapter->dev_info.dev_data->rx_mbuf_alloc_failed +=
+ SXE2_RX_REARM_THRESH_VEC;
+ goto l_end;
+ }
+ for (i = 0; i < SXE2_RX_REARM_THRESH_VEC; i += 2, buffer += 2) {
+ mbuf0 = buffer[0];
+ mbuf1 = buffer[1];
+#if RTE_IOVA_IN_MBUF
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
+ offsetof(struct rte_mbuf, buf_addr) + 8);
+#endif
+ virt_addr0 = _mm_loadu_si128((__m128i *)&mbuf0->buf_addr);
+ virt_addr1 = _mm_loadu_si128((__m128i *)&mbuf1->buf_addr);
+#if RTE_IOVA_IN_MBUF
+ dma_addr0 = _mm_unpackhi_epi64(virt_addr0, virt_addr0);
+ dma_addr1 = _mm_unpackhi_epi64(virt_addr1, virt_addr1);
+#else
+ dma_addr0 = _mm_unpacklo_epi64(virt_addr0, virt_addr0);
+ dma_addr1 = _mm_unpacklo_epi64(virt_addr1, virt_addr1);
+#endif
+ dma_addr0 = _mm_add_epi64(dma_addr0, hdr_room);
+ dma_addr1 = _mm_add_epi64(dma_addr1, hdr_room);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr0);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &desc++->read), dma_addr1);
+ }
+ rxq->realloc_start += SXE2_RX_REARM_THRESH_VEC;
+ if (rxq->realloc_start >= rxq->ring_depth)
+ rxq->realloc_start = 0;
+ rxq->realloc_num -= SXE2_RX_REARM_THRESH_VEC;
+ new_tail = (rxq->realloc_start == 0) ?
+ (rxq->ring_depth - 1) : (rxq->realloc_start - 1);
+ SXE2_PCI_REG_WRITE_WC(rxq->rdt_reg_addr, new_tail);
+l_end:
+ return;
+}
+
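+/* Gather the FNAV (flow director) valid bit from four descriptors
+ * and expand it into per-packet RTE_MBUF_F_RX_FDIR* flags.
+ */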
+static __rte_always_inline __m128i
+sxe2_rx_desc_fnav_flags_sse(__m128i descs_arr[4])
+{
+ __m128i descs_tmp1, descs_tmp2;
+ __m128i descs_fnav_vld;
+ __m128i v_zeros, v_ffff, v_u32_one;
+ __m128i m_flags;
+ const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID);
+ descs_tmp1 = _mm_unpacklo_epi32(descs_arr[0], descs_arr[1]);
+ descs_tmp2 = _mm_unpacklo_epi32(descs_arr[2], descs_arr[3]);
+ descs_fnav_vld = _mm_unpacklo_epi64(descs_tmp1, descs_tmp2);
+ descs_fnav_vld = _mm_slli_epi32(descs_fnav_vld, 26);
+ descs_fnav_vld = _mm_srli_epi32(descs_fnav_vld, 31);
+ v_zeros = _mm_setzero_si128();
+ v_ffff = _mm_cmpeq_epi32(v_zeros, v_zeros);
+ v_u32_one = _mm_srli_epi32(v_ffff, 31);
+ m_flags = _mm_cmpeq_epi32(descs_fnav_vld, v_u32_one);
+ m_flags = _mm_and_si128(m_flags, fdir_flags);
+ return m_flags;
+}
+
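+/* Derive per-packet ol_flags (VLAN strip, L3/L4 checksum status, RSS
+ * hash, optional flow-director match) from four descriptors via
+ * shuffle lookup tables and store them through each mbuf's
+ * rearm_data.
+ */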
+static __rte_always_inline void
+sxe2_rx_desc_offloads_para_fill_sse(struct sxe2_rx_queue *rxq,
+ volatile union sxe2_rx_desc *desc __rte_unused,
+ __m128i descs_arr[4],
+ struct rte_mbuf **rx_pkts)
+{
+ const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_init_value);
+ __m128i rearm_arr[4];
+ __m128i tmp_desc_lo, tmp_desc_hi, flags, tmp_flags;
+ const __m128i desc_flags_mask = _mm_set_epi32(0x00001C04, 0x00001C04,
+ 0x00001C04, 0x00001C04);
+ const __m128i desc_flags_rss_mask = _mm_set_epi32(0x20000000, 0x20000000,
+ 0x20000000, 0x20000000);
+ const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, RTE_MBUF_F_RX_VLAN |
+ RTE_MBUF_F_RX_VLAN_STRIPPED,
+ 0, 0, 0, 0);
+ const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+ 0, 0, 0, 0);
+ const __m128i cksum_flags =
+ _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_BAD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1),
+ ((RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1));
+ const __m128i cksum_mask =
+ _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+ RTE_MBUF_F_RX_IP_CKSUM_MASK |
+ RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
+ const __m128i vlan_mask =
+ _mm_set_epi32(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+ RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+ RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+ RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
+ flags = _mm_unpackhi_epi32(descs_arr[0], descs_arr[1]);
+ tmp_flags = _mm_unpackhi_epi32(descs_arr[2], descs_arr[3]);
+ tmp_desc_lo = _mm_unpacklo_epi64(flags, tmp_flags);
+ tmp_desc_hi = _mm_unpackhi_epi64(flags, tmp_flags);
+ tmp_desc_lo = _mm_and_si128(tmp_desc_lo, desc_flags_mask);
+ tmp_desc_hi = _mm_and_si128(tmp_desc_hi, desc_flags_rss_mask);
+ tmp_flags = _mm_shuffle_epi8(vlan_flags, tmp_desc_lo);
+ flags = _mm_and_si128(tmp_flags, vlan_mask);
+ tmp_desc_lo = _mm_srli_epi32(tmp_desc_lo, 10);
+ tmp_flags = _mm_shuffle_epi8(cksum_flags, tmp_desc_lo);
+ tmp_flags = _mm_slli_epi32(tmp_flags, 1);
+ tmp_flags = _mm_and_si128(tmp_flags, cksum_mask);
+ flags = _mm_or_si128(flags, tmp_flags);
+ tmp_desc_hi = _mm_srli_epi32(tmp_desc_hi, 27);
+ tmp_flags = _mm_shuffle_epi8(rss_flags, tmp_desc_hi);
+ flags = _mm_or_si128(flags, tmp_flags);
+#ifndef RTE_LIBRTE_SXE2_16BYTE_RX_DESC
+ if (rxq->fnav_enable) {
+ __m128i tmp_fnav_flags = sxe2_rx_desc_fnav_flags_sse(descs_arr);
+ flags = _mm_or_si128(flags, tmp_fnav_flags);
+ rx_pkts[0]->hash.fdir.hi = desc[0].wb.fd_filter_id;
+ rx_pkts[1]->hash.fdir.hi = desc[1].wb.fd_filter_id;
+ rx_pkts[2]->hash.fdir.hi = desc[2].wb.fd_filter_id;
+ rx_pkts[3]->hash.fdir.hi = desc[3].wb.fd_filter_id;
+ }
+#endif
+ rearm_arr[0] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 8), 0x30);
+ rearm_arr[1] = _mm_blend_epi16(mbuf_init, _mm_slli_si128(flags, 4), 0x30);
+ rearm_arr[2] = _mm_blend_epi16(mbuf_init, flags, 0x30);
+ rearm_arr[3] = _mm_blend_epi16(mbuf_init, _mm_srli_si128(flags, 4), 0x30);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
+ offsetof(struct rte_mbuf, rearm_data) + 8);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
+ RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[0]->rearm_data), rearm_arr[0]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[1]->rearm_data), rearm_arr[1]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[2]->rearm_data), rearm_arr[2]);
+ _mm_store_si128(RTE_CAST_PTR(__m128i *, &rx_pkts[3]->rearm_data), rearm_arr[3]);
+}
+
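+/* Core SSE receive loop: handle four descriptors per iteration.
+ * Descriptors are loaded in reverse order with compiler barriers in
+ * between, so that when a later descriptor shows DD set the earlier
+ * ones read afterwards are guaranteed complete; the popcount of the
+ * DD bits yields the number of finished packets in the group.
+ */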
+static inline uint16_t
+sxe2_rx_pkts_common_vec_sse(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_rxe_flags,
+ uint8_t *umbcast_flags)
+{
+ volatile union sxe2_rx_desc *desc;
+ struct rte_mbuf **buffer;
+ __m128i descs_arr[SXE2_RX_NUM_PER_LOOP_SSE];
+ __m128i mbuf_arr[SXE2_RX_NUM_PER_LOOP_SSE];
+ __m128i staterr, sterr_tmp1, sterr_tmp2;
+ __m128i pmbuf0;
+ __m128i ptype_all;
+#ifdef RTE_ARCH_X86_64
+ __m128i pmbuf1;
+#endif
+ uint32_t i;
+ uint32_t bit_num;
+ uint16_t done_num = 0;
+ const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ const __m128i crc_adjust =
+ _mm_set_epi16(0, 0, 0,
+ -rxq->crc_len,
+ 0, -rxq->crc_len,
+ 0, 0);
+ const __m128i rvp_shuf_mask =
+ _mm_set_epi8(7, 6, 5, 4,
+ 3, 2,
+ 13, 12,
+ 0xFF, 0xFF, 13, 12,
+ 0xFF, 0xFF, 0xFF, 0xFF);
+ const __m128i dd_mask = _mm_set_epi64x(0x0000000100000001LL,
+ 0x0000000100000001LL);
+ const __m128i eop_mask = _mm_slli_epi32(dd_mask,
+ SXE2_RX_DESC_STATUS_EOP_SHIFT);
+ const __m128i rxe_mask = _mm_set_epi64x(0x0000208000002080LL,
+ 0x0000208000002080LL);
+ const __m128i eop_shuf_mask = _mm_set_epi8(0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0x04, 0x0C,
+ 0x00, 0x08);
+ const __m128i ptype_mask = _mm_set_epi16(SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0,
+ SXE2_RX_DESC_PTYPE_MASK_NO_SHIFT, 0);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 4);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, vlan_tci) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 10);
+ RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, hash) !=
+ offsetof(struct rte_mbuf, rx_descriptor_fields1) + 12);
+ desc = &rxq->desc_ring[rxq->processing_idx];
+ rte_prefetch0(desc);
+ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, SXE2_RX_NUM_PER_LOOP_SSE);
+ if (rxq->realloc_num > SXE2_RX_REARM_THRESH_VEC)
+ sxe2_rx_queue_rearm_sse(rxq);
+ if ((rte_le_to_cpu_64(desc->wb.status_err_ptype_len) &
+ SXE2_RX_DESC_STATUS_DD_MASK) == 0)
+ goto l_end;
+ buffer = &rxq->buffer_ring[rxq->processing_idx];
+ for (i = 0; i < nb_pkts; i += SXE2_RX_NUM_PER_LOOP_SSE,
+ desc += SXE2_RX_NUM_PER_LOOP_SSE) {
+ pmbuf0 = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, &buffer[i]));
+ descs_arr[3] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 3));
+ rte_compiler_barrier();
+ _mm_storeu_si128((__m128i *)&rx_pkts[i], pmbuf0);
+#ifdef RTE_ARCH_X86_64
+ pmbuf1 = _mm_loadu_si128((__m128i *)&buffer[i + 2]);
+#endif
+ descs_arr[2] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 2));
+ rte_compiler_barrier();
+ descs_arr[1] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc + 1));
+ rte_compiler_barrier();
+ descs_arr[0] = _mm_loadu_si128(RTE_CAST_PTR(__m128i *, desc));
+#ifdef RTE_ARCH_X86_64
+ _mm_storeu_si128((__m128i *)&rx_pkts[i + 2], pmbuf1);
+#endif
+ if (split_rxe_flags) {
+ rte_mbuf_prefetch_part2(rx_pkts[i]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 1]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 2]);
+ rte_mbuf_prefetch_part2(rx_pkts[i + 3]);
+ }
+ rte_compiler_barrier();
+ mbuf_arr[3] = _mm_shuffle_epi8(descs_arr[3], rvp_shuf_mask);
+ mbuf_arr[2] = _mm_shuffle_epi8(descs_arr[2], rvp_shuf_mask);
+ mbuf_arr[1] = _mm_shuffle_epi8(descs_arr[1], rvp_shuf_mask);
+ mbuf_arr[0] = _mm_shuffle_epi8(descs_arr[0], rvp_shuf_mask);
+ sterr_tmp2 = _mm_unpackhi_epi32(descs_arr[3], descs_arr[2]);
+ sterr_tmp1 = _mm_unpackhi_epi32(descs_arr[1], descs_arr[0]);
+ sxe2_rx_desc_offloads_para_fill_sse(rxq, desc, descs_arr, rx_pkts);
+ mbuf_arr[3] = _mm_add_epi16(mbuf_arr[3], crc_adjust);
+ mbuf_arr[2] = _mm_add_epi16(mbuf_arr[2], crc_adjust);
+ mbuf_arr[1] = _mm_add_epi16(mbuf_arr[1], crc_adjust);
+ mbuf_arr[0] = _mm_add_epi16(mbuf_arr[0], crc_adjust);
+ staterr = _mm_unpacklo_epi32(sterr_tmp1, sterr_tmp2);
+ ptype_all = _mm_and_si128(staterr, ptype_mask);
+ _mm_storeu_si128((void *)&rx_pkts[i + 3]->rx_descriptor_fields1,
+ mbuf_arr[3]);
+ _mm_storeu_si128((void *)&rx_pkts[i + 2]->rx_descriptor_fields1,
+ mbuf_arr[2]);
+ if (umbcast_flags != NULL) {
+ const __m128i umbcast_mask =
+ _mm_set_epi32(SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK,
+ SXE2_RX_DESC_STATUS_UMBCAST_MASK);
+ const __m128i umbcast_shuf_mask =
+ _mm_set_epi8(0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0xFF, 0xFF,
+ 0x07, 0x0F,
+ 0x03, 0x0B);
+ __m128i umbcast_bits = _mm_and_si128(staterr, umbcast_mask);
+ umbcast_bits = _mm_shuffle_epi8(umbcast_bits, umbcast_shuf_mask);
+ *(int32_t *)umbcast_flags = _mm_cvtsi128_si32(umbcast_bits);
+ umbcast_flags += SXE2_RX_NUM_PER_LOOP_SSE;
+ }
+ if (split_rxe_flags != NULL) {
+ __m128i eop_bits = _mm_andnot_si128(staterr, eop_mask);
+ __m128i rxe_bits = _mm_and_si128(staterr, rxe_mask);
+ rxe_bits = _mm_srli_epi32(rxe_bits, 7);
+ eop_bits = _mm_or_si128(eop_bits, rxe_bits);
+ eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
+ *(int32_t *)split_rxe_flags = _mm_cvtsi128_si32(eop_bits);
+ split_rxe_flags += SXE2_RX_NUM_PER_LOOP_SSE;
+ }
+ staterr = _mm_and_si128(staterr, dd_mask);
+ staterr = _mm_packs_epi32(staterr, _mm_setzero_si128());
+ _mm_storeu_si128((void *)&rx_pkts[i + 1]->rx_descriptor_fields1,
+ mbuf_arr[1]);
+ _mm_storeu_si128((void *)&rx_pkts[i]->rx_descriptor_fields1,
+ mbuf_arr[0]);
+ rx_pkts[i + 3]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 3)];
+ rx_pkts[i + 2]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 7)];
+ rx_pkts[i + 1]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 1)];
+ rx_pkts[i]->packet_type = ptype_tbl[_mm_extract_epi16(ptype_all, 5)];
+ bit_num = rte_popcount64(_mm_cvtsi128_si64(staterr));
+ done_num += bit_num;
+ if (likely(bit_num != SXE2_RX_NUM_PER_LOOP_SSE))
+ break;
+ }
+ rxq->processing_idx += done_num;
+ rxq->processing_idx &= (rxq->ring_depth - 1);
+ rxq->realloc_num += done_num;
+ PMD_LOG_DEBUG(RX, "port_id=%u queue_id=%u last_id=%u recv_pkts=%d",
+ rxq->port_id, rxq->queue_id, rxq->processing_idx, done_num);
+l_end:
+ return done_num;
+}
+
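+/* Scattered Rx: take the fast path when no reassembly is pending and
+ * all split/error flags are zero; otherwise hand the burst to
+ * sxe2_rx_pkts_refactor() to stitch multi-segment packets together.
+ */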
+static __rte_always_inline uint16_t
+sxe2_rx_pkts_scattered_batch_vec_sse(struct sxe2_rx_queue *rxq,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ const uint64_t *split_rxe_flags64;
+ uint8_t split_rxe_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ uint8_t umbcast_flags[SXE2_RX_PKTS_BURST_BATCH_NUM_VEC] = {0};
+ uint16_t rx_done_num;
+ uint16_t rx_pkt_done_num;
+ rx_pkt_done_num = 0;
+
+ if (rxq->vsi->adapter->devargs.sw_stats_en) {
+ rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts,
+ nb_pkts, split_rxe_flags, umbcast_flags);
+ } else {
+ rx_done_num = sxe2_rx_pkts_common_vec_sse(rxq, rx_pkts,
+ nb_pkts, split_rxe_flags, NULL);
+ }
+ if (rx_done_num == 0)
+ goto l_end;
+ if (!rxq->vsi->adapter->devargs.sw_stats_en) {
+ split_rxe_flags64 = (uint64_t *)split_rxe_flags;
+ if (rxq->pkt_first_seg == NULL &&
+ split_rxe_flags64[0] == 0 &&
+ split_rxe_flags64[1] == 0 &&
+ split_rxe_flags64[2] == 0 &&
+ split_rxe_flags64[3] == 0) {
+ rx_pkt_done_num = rx_done_num;
+ goto l_end;
+ }
+ if (rxq->pkt_first_seg == NULL) {
+ while (rx_pkt_done_num < rx_done_num &&
+ split_rxe_flags[rx_pkt_done_num] == 0)
+ rx_pkt_done_num++;
+ if (rx_pkt_done_num == rx_done_num)
+ goto l_end;
+ rxq->pkt_first_seg = rx_pkts[rx_pkt_done_num];
+ }
+ }
+ rx_pkt_done_num += sxe2_rx_pkts_refactor(rxq, &rx_pkts[rx_pkt_done_num],
+ rx_done_num - rx_pkt_done_num, &split_rxe_flags[rx_pkt_done_num],
+ &umbcast_flags[rx_pkt_done_num]);
+l_end:
+ return rx_pkt_done_num;
+}
+
+uint16_t sxe2_rx_pkts_scattered_vec_sse_offload(void *rx_queue,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ uint16_t done_num = 0;
+ uint16_t once_num;
+
+ while (nb_pkts > SXE2_RX_PKTS_BURST_BATCH_NUM_VEC) {
+ once_num =
+ sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue,
+ rx_pkts + done_num,
+ SXE2_RX_PKTS_BURST_BATCH_NUM_VEC);
+ done_num += once_num;
+ nb_pkts -= once_num;
+ if (once_num < SXE2_RX_PKTS_BURST_BATCH_NUM_VEC)
+ goto l_end;
+ }
+ done_num +=
+ sxe2_rx_pkts_scattered_batch_vec_sse((struct sxe2_rx_queue *)rx_queue,
+ rx_pkts + done_num, nb_pkts);
+l_end:
+ return done_num;
+}
--
2.47.3
* [PATCH v15 11/11] net/sxe2: implement Tx done cleanup
2026-05-16 7:46 ` [PATCH v15 00/11] net/sxe2: fix logic errors and address feedback liujie5
` (9 preceding siblings ...)
2026-05-16 7:46 ` [PATCH v15 10/11] net/sxe2: add vectorized " liujie5
@ 2026-05-16 7:46 ` liujie5
10 siblings, 0 replies; 30+ messages in thread
From: liujie5 @ 2026-05-16 7:46 UTC (permalink / raw)
To: stephen; +Cc: dev, Jie Liu
From: Jie Liu <liujie5@linkdatatechnology.com>
This patch implements the 'tx_done_cleanup' ethdev op in the sxe2
PMD. This interface allows applications to explicitly request that
the driver release mbufs that have already been transmitted and are
no longer needed by the hardware.
The implementation iterates through the Tx ring, checking the status
of the descriptors starting from the last cleaned position. It
releases the corresponding mbufs back to their mempool until either
the requested number of packets has been freed or no more completed
descriptors are found.
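As a usage sketch (the port/queue IDs and free count below are
illustrative assumptions, not part of this patch), an application
reaches this op through the generic ethdev API:

  #include <stdio.h>
  #include <rte_ethdev.h>

  /* Ask the PMD to free up to 32 already-transmitted mbufs on
   * port 0, Tx queue 0. A negative return value is an errno,
   * e.g. -ENOTSUP when a vector Tx path is active in sxe2.
   */
  static void example_tx_done_cleanup(void)
  {
      int cleaned = rte_eth_tx_done_cleanup(0, 0, 32);

      if (cleaned < 0)
          printf("tx_done_cleanup failed: %d\n", cleaned);
      else
          printf("freed %d transmitted packets\n", cleaned);
  }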
Signed-off-by: Jie Liu <liujie5@linkdatatechnology.com>
---
drivers/net/sxe2/sxe2_ethdev.c | 1 +
drivers/net/sxe2/sxe2_txrx.h | 1 +
drivers/net/sxe2/sxe2_txrx_poll.c | 101 +++++++++++++++++++++++++++++-
3 files changed, 102 insertions(+), 1 deletion(-)
diff --git a/drivers/net/sxe2/sxe2_ethdev.c b/drivers/net/sxe2/sxe2_ethdev.c
index d1bdc22bd0..8d66e5d8c5 100644
--- a/drivers/net/sxe2/sxe2_ethdev.c
+++ b/drivers/net/sxe2/sxe2_ethdev.c
@@ -290,6 +290,7 @@ static const struct eth_dev_ops sxe2_eth_dev_ops = {
.txq_info_get = sxe2_tx_queue_info_get,
.rx_burst_mode_get = sxe2_rx_burst_mode_get,
.tx_burst_mode_get = sxe2_tx_burst_mode_get,
+ .tx_done_cleanup = sxe2_tx_done_cleanup,
};
struct sxe2_pci_map_bar_info *sxe2_dev_get_bar_info(struct sxe2_adapter *adapter,
diff --git a/drivers/net/sxe2/sxe2_txrx.h b/drivers/net/sxe2/sxe2_txrx.h
index 61c6641e49..6d3d7455c2 100644
--- a/drivers/net/sxe2/sxe2_txrx.h
+++ b/drivers/net/sxe2/sxe2_txrx.h
@@ -12,6 +12,7 @@ int32_t __rte_cold sxe2_tx_simple_batch_support_check(struct rte_eth_dev *dev,
uint32_t *batch_flags);
uint16_t sxe2_tx_pkts_prepare(void *tx_queue,
struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+int32_t sxe2_tx_done_cleanup(void *txq, uint32_t free_cnt);
void sxe2_tx_mode_func_set(struct rte_eth_dev *dev);
void __rte_cold sxe2_rx_queue_reset(struct sxe2_rx_queue *rxq);
void sxe2_rx_mode_func_set(struct rte_eth_dev *dev);
diff --git a/drivers/net/sxe2/sxe2_txrx_poll.c b/drivers/net/sxe2/sxe2_txrx_poll.c
index dc6a83e380..7302ceb6f5 100644
--- a/drivers/net/sxe2/sxe2_txrx_poll.c
+++ b/drivers/net/sxe2/sxe2_txrx_poll.c
@@ -8,12 +8,13 @@
#include <rte_malloc.h>
#include <rte_memzone.h>
#include <ethdev_driver.h>
-#include <unistd.h>
#include "sxe2_osal.h"
#include "sxe2_txrx_common.h"
+#include "sxe2_txrx_vec_common.h"
#include "sxe2_txrx_poll.h"
#include "sxe2_txrx.h"
+#include "sxe2_txrx_vec.h"
#include "sxe2_queue.h"
#include "sxe2_ethdev.h"
#include "sxe2_common_log.h"
@@ -118,6 +119,104 @@ static inline int32_t sxe2_tx_cleanup(struct sxe2_tx_queue *txq)
return ret;
}
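+/* Cleanup for the simple-batch Tx path: free completed mbufs in
+ * rs_thresh-sized chunks, after rounding free_cnt down to a multiple
+ * of rs_thresh.
+ */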
+static int32_t sxe2_tx_done_cleanup_simple(struct sxe2_tx_queue *txq, uint32_t free_cnt)
+{
+ uint32_t free_cnt_align;
+ uint32_t free_cnt_once;
+ uint32_t i;
+
+ if (free_cnt == 0 || free_cnt > txq->ring_depth)
+ free_cnt = txq->ring_depth;
+
+ free_cnt_align = free_cnt - (free_cnt % txq->rs_thresh);
+ for (i = 0; i < free_cnt_align; i += free_cnt_once) {
+ if ((txq->ring_depth - txq->desc_free_num) < txq->rs_thresh)
+ break;
+
+ free_cnt_once = sxe2_tx_bufs_free(txq);
+ if (free_cnt_once == 0)
+ break;
+ }
+
+ return i;
+}
+
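+/* Cleanup for the scalar Tx path: walk the buffer ring from the last
+ * cleaned position, freeing mbuf chains and counting a packet only
+ * when its last segment is released.
+ */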
+static int32_t sxe2_tx_done_cleanup_normal(struct sxe2_tx_queue *txq, uint32_t free_cnt)
+{
+ struct sxe2_tx_buffer *buffer_ring = txq->buffer_ring;
+ int32_t ret;
+ uint16_t clean_last_idx, clean_idx;
+ uint16_t clean_last, clean_once;
+ uint16_t pkt_cnt, i;
+
+ if (txq->desc_free_num == 0 && sxe2_tx_cleanup(txq) != 0) {
+ ret = 0;
+ goto l_end;
+ }
+
+ if (free_cnt == 0)
+ free_cnt = txq->ring_depth;
+
+ clean_last_idx = txq->next_use;
+ clean_idx = buffer_ring[clean_last_idx].next_id;
+ clean_once = txq->desc_free_num;
+ clean_last = txq->desc_free_num;
+
+ for (pkt_cnt = 0; pkt_cnt < free_cnt;) {
+ for (i = 0; ((i < clean_once) &&
+ (pkt_cnt < free_cnt) &&
+ clean_idx != clean_last_idx); ++i) {
+ if (buffer_ring[clean_idx].mbuf != NULL) {
+ rte_pktmbuf_free_seg(buffer_ring[clean_idx].mbuf);
+ buffer_ring[clean_idx].mbuf = NULL;
+ if (buffer_ring[clean_idx].last_id == clean_idx)
+ pkt_cnt++;
+ }
+ clean_idx = buffer_ring[clean_idx].next_id;
+ }
+
+ if ((txq->rs_thresh > (txq->ring_depth - txq->desc_free_num)) ||
+ clean_idx == clean_last_idx)
+ break;
+
+ if (pkt_cnt < free_cnt) {
+ if (sxe2_tx_cleanup(txq) != 0)
+ break;
+
+ clean_once = txq->desc_free_num - clean_last;
+ clean_last = txq->desc_free_num;
+ }
+ }
+
+ ret = pkt_cnt;
+l_end:
+ return ret;
+}
+
+int32_t sxe2_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+ struct sxe2_tx_queue *txq = (struct sxe2_tx_queue *)tx_queue;
+ struct sxe2_adapter *adapter;
+ int32_t ret;
+
+ if (txq == NULL) {
+ ret = 0;
+ goto l_end;
+ }
+
+ adapter = txq->vsi->adapter;
+ if (adapter->q_ctxt.tx_mode_flags & SXE2_TX_MODE_VEC_SET_MASK)
+ ret = -ENOTSUP;
+ else if (adapter->q_ctxt.tx_mode_flags & SXE2_TX_MODE_SIMPLE_BATCH)
+ ret = sxe2_tx_done_cleanup_simple(txq, free_cnt);
+ else
+ ret = sxe2_tx_done_cleanup_normal(txq, free_cnt);
+
+ PMD_LOG_DEBUG(TX, "TX cleanup done desc queue_id=%u free_cnt=%d.",
+ txq->queue_id, ret);
+
+l_end:
+ return ret;
+}
+
static __rte_always_inline uint16_t
sxe2_tx_pkt_data_desc_count(struct rte_mbuf *tx_pkt)
{
--
2.47.3