* [PATCH 0/2] iavf: use ptype for LLDP and add AVX2 ctx paths
@ 2026-02-09 15:20 Ciara Loftus
2026-02-09 15:20 ` [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP Ciara Loftus
` (2 more replies)
0 siblings, 3 replies; 25+ messages in thread
From: Ciara Loftus @ 2026-02-09 15:20 UTC (permalink / raw)
To: dev; +Cc: Ciara Loftus
This series changes the iavf driver to use the mbuf packet type
field instead of a dynamic field for LLDP packets, and adds new AVX2
context descriptor Tx paths which support LLDP.
Ciara Loftus (2):
net/iavf: use mbuf packet type instead of dynfield for LLDP
net/iavf: add AVX2 context descriptor Tx paths
doc/guides/nics/intel_vf.rst | 16 +-
doc/guides/rel_notes/release_26_03.rst | 2 +
drivers/net/intel/iavf/iavf.h | 2 +
drivers/net/intel/iavf/iavf_ethdev.c | 13 +-
drivers/net/intel/iavf/iavf_rxtx.c | 21 +-
drivers/net/intel/iavf/iavf_rxtx.h | 12 +-
drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++
drivers/net/intel/iavf/iavf_testpmd.c | 20 +-
drivers/net/intel/iavf/rte_pmd_iavf.h | 10 +
9 files changed, 448 insertions(+), 34 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 25+ messages in thread* [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP 2026-02-09 15:20 [PATCH 0/2] iavf: use ptype for LLDP and add AVX2 ctx paths Ciara Loftus @ 2026-02-09 15:20 ` Ciara Loftus 2026-02-09 16:10 ` Bruce Richardson 2026-02-09 15:20 ` [PATCH 2/2] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2 siblings, 1 reply; 25+ messages in thread From: Ciara Loftus @ 2026-02-09 15:20 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Instead of using a dynamic mbuf field to flag a packet as LLDP, utilise the mbuf packet type field as the identifier. If the type is RTE_PTYPE_L2_ETHER_LLDP the packet is identified as LLDP. This approach is preferable because the use of dynamic mbuf fields should be reserved for features that are not easily implemented using the existing mbuf infrastructure. No negative performance impacts were observed with the new approach. 
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/nics/intel_vf.rst | 16 +++++++++------- doc/guides/rel_notes/release_26_03.rst | 1 + drivers/net/intel/iavf/iavf_ethdev.c | 13 ++++++++----- drivers/net/intel/iavf/iavf_rxtx.c | 3 +-- drivers/net/intel/iavf/iavf_rxtx.h | 8 +++----- drivers/net/intel/iavf/iavf_testpmd.c | 20 +++++--------------- drivers/net/intel/iavf/rte_pmd_iavf.h | 10 ++++++++++ 7 files changed, 37 insertions(+), 34 deletions(-) diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index bc600e4b58..197918b2e8 100644 --- a/doc/guides/nics/intel_vf.rst +++ b/doc/guides/nics/intel_vf.rst @@ -668,19 +668,21 @@ Inline IPsec Support Diagnostic Utilities -------------------- -Register mbuf dynfield to test Tx LLDP -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Enable Tx LLDP +~~~~~~~~~~~~~~ -Register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` on ``dev_start`` -to indicate the need to send LLDP packet. -This dynfield needs to be set to 1 when preparing packet. - -For ``dpdk-testpmd`` application, it needs to stop and restart Tx port to take effect. +In order for the iavf PMD to transmit LLDP packets, two conditions must be met: +1. mbufs carrying LLDP packets must have their ptype set to RTE_PTYPE_L2_ETHER_LLDP +2. LLDP needs to be explicitly enabled eg. via ``dpdk-testpmd``: Usage:: testpmd> set tx lldp on +Note: the feature should be enabled before the device is started, so that a transmit +path that is capable of transmitting LLDP packets is selected ie. one that supports +context descriptors. + Limitations or Knowing issues ----------------------------- diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index a0f89b5ea2..c58c5cebd0 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -62,6 +62,7 @@ New Features * **Updated Intel iavf driver.** * Added support for pre and post VF reset callbacks. 
+ * Changed LLDP packet detection from dynamic mbuf field to mbuf packet_type. Removed Items diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 802e095174..ae5fd86171 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -45,7 +45,7 @@ #define IAVF_MBUF_CHECK_ARG "mbuf_check" uint64_t iavf_timestamp_dynflag; int iavf_timestamp_dynfield_offset = -1; -int rte_pmd_iavf_tx_lldp_dynfield_offset = -1; +bool iavf_tx_lldp_enabled; static const char * const iavf_valid_args[] = { IAVF_PROTO_XTR_ARG, @@ -1024,10 +1024,6 @@ iavf_dev_start(struct rte_eth_dev *dev) } } - /* Check Tx LLDP dynfield */ - rte_pmd_iavf_tx_lldp_dynfield_offset = - rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); - if (iavf_init_queues(dev) != 0) { PMD_DRV_LOG(ERR, "failed to do Queue init"); return -1; @@ -3203,6 +3199,13 @@ rte_pmd_iavf_reinit(uint16_t port) return 0; } +RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_iavf_enable_tx_lldp, 26.03) +void +rte_pmd_iavf_enable_tx_lldp(bool enable) +{ + iavf_tx_lldp_enabled = enable; +} + static int iavf_validate_reset_cb(uint16_t port, void *cb, void *cb_arg) { diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4b763627bc..2fdd0f5ffe 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -4243,8 +4243,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) if (iavf_tx_vec_dev_check(dev) != -1) req_features.simd_width = iavf_get_max_simd_bitwidth(); - if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) - req_features.ctx_desc = true; + req_features.ctx_desc = iavf_tx_lldp_enabled; for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index e1f78dcde0..f8d75abe35 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -168,14 +168,12 @@ #define IAVF_TX_LLDP_DYNFIELD 
"intel_pmd_dynfield_tx_lldp" #define IAVF_CHECK_TX_LLDP(m) \ - ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ - (*RTE_MBUF_DYNFIELD((m), \ - rte_pmd_iavf_tx_lldp_dynfield_offset, \ - uint8_t *))) + (iavf_tx_lldp_enabled && \ + ((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP) extern uint64_t iavf_timestamp_dynflag; extern int iavf_timestamp_dynfield_offset; -extern int rte_pmd_iavf_tx_lldp_dynfield_offset; +extern bool iavf_tx_lldp_enabled; typedef void (*iavf_rxd_to_pkt_fields_t)(struct ci_rx_queue *rxq, struct rte_mbuf *mb, diff --git a/drivers/net/intel/iavf/iavf_testpmd.c b/drivers/net/intel/iavf/iavf_testpmd.c index 4731d0b61b..44dc568e91 100644 --- a/drivers/net/intel/iavf/iavf_testpmd.c +++ b/drivers/net/intel/iavf/iavf_testpmd.c @@ -39,21 +39,11 @@ cmd_enable_tx_lldp_parsed(void *parsed_result, __rte_unused struct cmdline *cl, __rte_unused void *data) { struct cmd_enable_tx_lldp_result *res = parsed_result; - const struct rte_mbuf_dynfield iavf_tx_lldp_dynfield = { - .name = IAVF_TX_LLDP_DYNFIELD, - .size = sizeof(uint8_t), - .align = alignof(uint8_t), - .flags = 0 - }; - int offset; - - if (strncmp(res->what, "on", 2) == 0) { - offset = rte_mbuf_dynfield_register(&iavf_tx_lldp_dynfield); - printf("rte_pmd_iavf_tx_lldp_dynfield_offset: %d", offset); - if (offset < 0) - fprintf(stderr, - "rte mbuf dynfield register failed, offset: %d", offset); - } + bool enable = strncmp(res->what, "on", 2) == 0; + + rte_pmd_iavf_enable_tx_lldp(enable); + + printf("Tx LLDP %s on iavf driver\n", enable ? 
"enabled" : "disabled"); } static cmdline_parse_inst_t cmd_enable_tx_lldp = { diff --git a/drivers/net/intel/iavf/rte_pmd_iavf.h b/drivers/net/intel/iavf/rte_pmd_iavf.h index df4e947e85..2ae83dcbce 100644 --- a/drivers/net/intel/iavf/rte_pmd_iavf.h +++ b/drivers/net/intel/iavf/rte_pmd_iavf.h @@ -154,6 +154,16 @@ int rte_pmd_iavf_register_post_reset_cb(uint16_t port, iavf_post_reset_cb_t post_reset_cb, void *post_reset_cb_arg); +/** + * Enable or disable Tx LLDP on the iavf driver. + * + * @param enable + * Set to true to enable Tx LLDP, false to disable. + */ +__rte_experimental +void +rte_pmd_iavf_enable_tx_lldp(bool enable); + /** * The mbuf dynamic field pointer for flexible descriptor's extraction metadata. */ -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP 2026-02-09 15:20 ` [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP Ciara Loftus @ 2026-02-09 16:10 ` Bruce Richardson 2026-03-06 11:49 ` Loftus, Ciara 0 siblings, 1 reply; 25+ messages in thread From: Bruce Richardson @ 2026-02-09 16:10 UTC (permalink / raw) To: Ciara Loftus; +Cc: dev On Mon, Feb 09, 2026 at 03:20:48PM +0000, Ciara Loftus wrote: > Instead of using a dynamic mbuf field to flag a packet as LLDP, instead > utilise the mbuf packet type field as the identifier instead. If the > type is RTE_PTYPE_L2_ETHER_LLDP the packet is identified as LLDP. This > approach is preferable because the use of dynamic mbuf fields should be > reserved for features that are not easily implemented using the existing > mbuf infrastructure. No negative performance impacts were observed with > the new approach. > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> For ABI compatibility, though, I think we need to keep the old method of identifying LLDP packets there too. While I completely agree that using packet types is the better approach, we need to go through a deprecation process for the old dynamic field method. 
> --- > doc/guides/nics/intel_vf.rst | 16 +++++++++------- > doc/guides/rel_notes/release_26_03.rst | 1 + > drivers/net/intel/iavf/iavf_ethdev.c | 13 ++++++++----- > drivers/net/intel/iavf/iavf_rxtx.c | 3 +-- > drivers/net/intel/iavf/iavf_rxtx.h | 8 +++----- > drivers/net/intel/iavf/iavf_testpmd.c | 20 +++++--------------- > drivers/net/intel/iavf/rte_pmd_iavf.h | 10 ++++++++++ > 7 files changed, 37 insertions(+), 34 deletions(-) > > diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst > index bc600e4b58..197918b2e8 100644 > --- a/doc/guides/nics/intel_vf.rst > +++ b/doc/guides/nics/intel_vf.rst > @@ -668,19 +668,21 @@ Inline IPsec Support > Diagnostic Utilities > -------------------- > <snip> > > +RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_iavf_enable_tx_lldp, 26.03) > +void > +rte_pmd_iavf_enable_tx_lldp(bool enable) > +{ > + iavf_tx_lldp_enabled = enable; > +} > + Rather than a private function, this looks something that should be a feature flag or capability somewhere, e.g. like TSO. ^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP 2026-02-09 16:10 ` Bruce Richardson @ 2026-03-06 11:49 ` Loftus, Ciara 0 siblings, 0 replies; 25+ messages in thread From: Loftus, Ciara @ 2026-03-06 11:49 UTC (permalink / raw) To: Richardson, Bruce; +Cc: dev@dpdk.org > Subject: Re: [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for > LLDP > > On Mon, Feb 09, 2026 at 03:20:48PM +0000, Ciara Loftus wrote: > > Instead of using a dynamic mbuf field to flag a packet as LLDP, instead > > utilise the mbuf packet type field as the identifier instead. If the > > type is RTE_PTYPE_L2_ETHER_LLDP the packet is identified as LLDP. This > > approach is preferable because the use of dynamic mbuf fields should be > > reserved for features that are not easily implemented using the existing > > mbuf infrastructure. No negative performance impacts were observed with > > the new approach. > > > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> > > For ABI compatibility, though, I think we need to keep the old method of > identifying LLDP packets there too. While I completely agree that using > packet types is the better approach, we need to go through a deprecation > process for the old dynamic field method, though. I will submit a v2 supporting both the dynfield and ptype approach, and a deprecation notice flagging the future removal of the dynfield. 
> > > --- > > doc/guides/nics/intel_vf.rst | 16 +++++++++------- > > doc/guides/rel_notes/release_26_03.rst | 1 + > > drivers/net/intel/iavf/iavf_ethdev.c | 13 ++++++++----- > > drivers/net/intel/iavf/iavf_rxtx.c | 3 +-- > > drivers/net/intel/iavf/iavf_rxtx.h | 8 +++----- > > drivers/net/intel/iavf/iavf_testpmd.c | 20 +++++--------------- > > drivers/net/intel/iavf/rte_pmd_iavf.h | 10 ++++++++++ > > 7 files changed, 37 insertions(+), 34 deletions(-) > > > > diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst > > index bc600e4b58..197918b2e8 100644 > > --- a/doc/guides/nics/intel_vf.rst > > +++ b/doc/guides/nics/intel_vf.rst > > @@ -668,19 +668,21 @@ Inline IPsec Support > > Diagnostic Utilities > > -------------------- > > > > <snip> > > > > > +RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_iavf_enable_tx_lldp, > 26.03) > > +void > > +rte_pmd_iavf_enable_tx_lldp(bool enable) > > +{ > > + iavf_tx_lldp_enabled = enable; > > +} > > + > Rather than a private function, this looks something that should be a > feature flag or capability somewhere, e.g. like TSO. I think that would be a neater solution for iavf, but I can't find any evidence of any other driver requiring this sort of explicit enabling of LLDP as a feature. So it would be hard to justify adding such a feature flag/capability when it looks like there might only be one driver to use it. I will consider this some more. Since we are keeping the dynfield implementation for now, registering the dynfield is equivalent to enabling the feature so we don't need an API like this in the v2. When we eventually remove the dynfield I will try to find a better solution for enabling the feature, that isn't a private function. ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 2/2] net/iavf: add AVX2 context descriptor Tx paths 2026-02-09 15:20 [PATCH 0/2] iavf: use ptype for LLDP and add AVX2 ctx paths Ciara Loftus 2026-02-09 15:20 ` [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP Ciara Loftus @ 2026-02-09 15:20 ` Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-02-09 15:20 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Prior to this commit the transmit paths implemented for AVX2 used the transmit descriptor only, making some offloads and features unavailable on the AVX2 path, like LLDP. Enable two new AVX2 transmit paths, both of which support using a context descriptor, one which performs offload and the other which does not. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/release_26_03.rst | 1 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 4 + drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 5 files changed, 411 insertions(+) diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index c58c5cebd0..3b16f0b00c 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -63,6 +63,7 @@ New Features * Added support for pre and post VF reset callbacks. * Changed LLDP packet detection from dynamic mbuf field to mbuf packet_type. + * Implemented AVX2 context descriptor transmit paths. 
Removed Items diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index 39949acc11..d4dd48d520 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -357,6 +357,8 @@ enum iavf_tx_func_type { IAVF_TX_DEFAULT, IAVF_TX_AVX2, IAVF_TX_AVX2_OFFLOAD, + IAVF_TX_AVX2_CTX, + IAVF_TX_AVX2_CTX_OFFLOAD, IAVF_TX_AVX512, IAVF_TX_AVX512_OFFLOAD, IAVF_TX_AVX512_CTX, diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 2fdd0f5ffe..6effc97c07 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3974,6 +3974,24 @@ static const struct ci_tx_path_info iavf_tx_path_infos[] = { .simd_width = RTE_VECT_SIMD_256 } }, + [IAVF_TX_AVX2_CTX] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx, + .info = "Vector AVX2 Ctx", + .features = { + .tx_offloads = IAVF_TX_VECTOR_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, + [IAVF_TX_AVX2_CTX_OFFLOAD] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx_offload, + .info = "Vector AVX2 Ctx Offload", + .features = { + .tx_offloads = IAVF_TX_VECTOR_CTX_OFFLOAD_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, #ifdef CC_AVX512_SUPPORT [IAVF_TX_AVX512] = { .pkt_burst = iavf_xmit_pkts_vec_avx512, diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index f8d75abe35..147f1d03f1 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -609,6 +609,10 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc); 
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index e29958e0bc..8c2bc73819 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -1774,6 +1774,392 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } +static inline void +iavf_fill_ctx_desc_tunneling_avx2(uint64_t *low_ctx_qw, struct rte_mbuf *pkt) +{ + if (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (pkt->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = pkt->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. 
+ * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (pkt->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (pkt->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *low_ctx_qw = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; + + } else { + *low_ctx_qw = 0; + } +} + +static inline void +iavf_fill_ctx_desc_tunneling_field(volatile uint64_t *qw0, + const struct rte_mbuf *m) +{ + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (m->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = m->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case 
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. + * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; +} + +static __rte_always_inline void +ctx_vtx1(volatile struct iavf_tx_desc *txdp, struct rte_mbuf *pkt, + uint64_t flags, bool offload, uint8_t vlan_flag) +{ + uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw = 0; + + if (offload) { + iavf_fill_ctx_desc_tunneling_avx2(&low_ctx_qw, pkt); +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (pkt->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? 
+ (uint64_t)pkt->vlan_tci_outer : + (uint64_t)pkt->vlan_tci; + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= (uint64_t)pkt->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } +#endif + } + if (IAVF_CHECK_TX_LLDP(pkt)) + high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + uint64_t high_data_qw = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) | + ((uint64_t)pkt->data_len << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)); + if (offload) + iavf_txd_enable_offload(pkt, &high_data_qw, vlan_flag); + + __m256i ctx_data_desc = _mm256_set_epi64x(high_data_qw, pkt->buf_iova + pkt->data_off, + high_ctx_qw, low_ctx_qw); + + _mm256_storeu_si256(RTE_CAST_PTR(__m256i *, txdp), ctx_data_desc); +} + +static __rte_always_inline void +ctx_vtx(volatile struct iavf_tx_desc *txdp, + struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, + bool offload, uint8_t vlan_flag) +{ + uint64_t hi_data_qw_tmpl = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT)); + + /* if unaligned on 32-bit boundary, do one to align */ + if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + nb_pkts--, txdp++, pkt++; + } + + for (; nb_pkts > 1; txdp += 4, pkt += 2, nb_pkts -= 2) { + uint64_t hi_ctx_qw1 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t hi_ctx_qw0 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw1 = 0; + uint64_t low_ctx_qw0 = 0; + uint64_t hi_data_qw1 = 0; + uint64_t hi_data_qw0 = 0; + + hi_data_qw1 = hi_data_qw_tmpl | + ((uint64_t)pkt[1]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + hi_data_qw0 = hi_data_qw_tmpl | + ((uint64_t)pkt[0]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if 
(offload) { + if (pkt[1]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[1]->vlan_tci : + (uint64_t)pkt[1]->vlan_tci_outer; + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[1]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw1 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= + (uint64_t)pkt[1]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[1])) + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[0]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[0]->vlan_tci : + (uint64_t)pkt[0]->vlan_tci_outer; + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[0]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw0 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= + (uint64_t)pkt[0]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[0])) + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + + if (offload) { + iavf_txd_enable_offload(pkt[1], &hi_data_qw1, vlan_flag); + iavf_txd_enable_offload(pkt[0], &hi_data_qw0, vlan_flag); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw1, pkt[1]); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw0, pkt[0]); + } + + __m256i desc2_3 = + _mm256_set_epi64x + (hi_data_qw1, pkt[1]->buf_iova + pkt[1]->data_off, + hi_ctx_qw1, low_ctx_qw1); + __m256i desc0_1 = + _mm256_set_epi64x + (hi_data_qw0, pkt[0]->buf_iova + pkt[0]->data_off, + hi_ctx_qw0, low_ctx_qw0); + 
_mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp + 2), desc2_3); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp), desc0_1); + } + + if (nb_pkts) + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); +} + +static __rte_always_inline uint16_t +iavf_xmit_fixed_burst_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + volatile struct iavf_tx_desc *txdp; + struct ci_tx_entry_vec *txep; + uint16_t n, nb_commit, nb_mbuf, tx_id; + /* bit2 is reserved and must be set to 1 according to Spec */ + uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; + uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; + + if (txq->nb_tx_free < txq->tx_free_thresh) + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); + + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); + nb_commit &= 0xFFFE; + if (unlikely(nb_commit == 0)) + return 0; + + nb_pkts = nb_commit >> 1; + tx_id = txq->tx_tail; + txdp = &txq->iavf_tx_ring[tx_id]; + txep = (void *)txq->sw_ring; + txep += (tx_id >> 1); + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); + n = (uint16_t)(txq->nb_tx_desc - tx_id); + + if (n != 0 && nb_commit >= n) { + nb_mbuf = n >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag); + tx_pkts += (nb_mbuf - 1); + txdp += (n - 2); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag); + + nb_commit = (uint16_t)(nb_commit - n); + + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + tx_id = 0; + /* avoid reach the end of ring */ + txdp = txq->iavf_tx_ring; + txep = (void *)txq->sw_ring; + } + + nb_mbuf = nb_commit >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); + tx_id = (uint16_t)(tx_id + nb_commit); + + if (tx_id > txq->tx_next_rs) { + txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + 
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << + IAVF_TXD_QW1_CMD_SHIFT); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); + } + + txq->tx_tail = tx_id; + + IAVF_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); + return nb_pkts; +} + +static __rte_always_inline uint16_t +iavf_xmit_pkts_vec_avx2_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + uint16_t nb_tx = 0; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + + while (nb_pkts) { + uint16_t ret, num; + + /* cross rs_thresh boundary is not allowed */ + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); + num = num >> 1; + ret = iavf_xmit_fixed_burst_vec_avx2_ctx(tx_queue, &tx_pkts[nb_tx], + num, offload); + nb_tx += ret; + nb_pkts -= ret; + if (ret < num) + break; + } + + return nb_tx; +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, false); +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, true); +} + static __rte_always_inline uint16_t iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths 2026-02-09 15:20 [PATCH 0/2] iavf: use ptype for LLDP and add AVX2 ctx paths Ciara Loftus 2026-02-09 15:20 ` [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP Ciara Loftus 2026-02-09 15:20 ` [PATCH 2/2] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-03-06 11:52 ` Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 1/3] net/iavf: support LLDP Tx based on mbuf ptype or dynfield Ciara Loftus ` (3 more replies) 2 siblings, 4 replies; 25+ messages in thread From: Ciara Loftus @ 2026-03-06 11:52 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus This series adds another way to detect LLDP packets in the Tx paths of the iavf PMD, which is based on the mbuf packet type field. This is in addition to the existing method of using a dynamic field. The series also adds new AVX2 context descriptor Tx paths that support LLDP. Finally, a deprecation notice is added to the documentation to flag that the dynamic mbuf field method of LLDP packet detection will be removed in a future release. v2: * Support both dynfield and ptype to preserve ABI * Add deprecation notice for dynfield approach Ciara Loftus (3): net/iavf: support LLDP Tx based on mbuf ptype or dynfield net/iavf: add AVX2 context descriptor Tx paths doc: announce change to LLDP packet detection in iavf PMD doc/guides/nics/intel_vf.rst | 6 + doc/guides/rel_notes/deprecation.rst | 4 + doc/guides/rel_notes/release_26_03.rst | 2 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 9 +- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 7 files changed, 424 insertions(+), 3 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v2 1/3] net/iavf: support LLDP Tx based on mbuf ptype or dynfield 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus @ 2026-03-06 11:52 ` Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus ` (2 subsequent siblings) 3 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-03-06 11:52 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Prior to this commit, the only way to flag to the iavf PMD that an mbuf contained an LLDP packet intended for transmission was to register and set a dynamic mbuf field. This commit introduces an alternative approach: rather than setting the dynamic mbuf field, the user may instead set the packet type of the mbuf to RTE_PTYPE_L2_ETHER_LLDP. If the LLDP feature is enabled (i.e. if the dynamic mbuf field is registered), on Tx the driver will check both the dynamic mbuf field and the packet type, and if either indicates that the packet is an LLDP packet, it will be transmitted. Using the packet type to identify an LLDP packet instead of the dynamic mbuf field is preferred because the use of dynamic mbuf fields should be reserved for features that are not easily implemented using the existing mbuf infrastructure. It may also reduce overhead in the application: if the Rx driver sets the packet type to LLDP, the application no longer needs to set any fields explicitly before Tx. It is intended that the dynamic mbuf field approach will be removed in a future release, at which point only the packet type approach will be supported. 
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- v2: * Support both dynfield and ptype methods to preserve ABI --- doc/guides/nics/intel_vf.rst | 6 ++++++ doc/guides/rel_notes/release_26_03.rst | 1 + drivers/net/intel/iavf/iavf_rxtx.h | 5 ++--- 3 files changed, 9 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index bc600e4b58..ad25389521 100644 --- a/doc/guides/nics/intel_vf.rst +++ b/doc/guides/nics/intel_vf.rst @@ -681,6 +681,12 @@ Usage:: testpmd> set tx lldp on +An alternative method for transmitting LLDP packets is to register the dynamic field as +above and, rather than setting the dynfield value, set the ``packet_type`` of the mbuf to +``RTE_PTYPE_L2_ETHER_LLDP``. The driver will check both the dynamic field and the packet +type, and if either indicates that the packet is an LLDP packet, the driver will transmit +it. + Limitations or Knowing issues ----------------------------- diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index 884d2535df..5ad531aef5 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -80,6 +80,7 @@ New Features * **Updated Intel iavf driver.** * Added support for pre and post VF reset callbacks. + * Added support for transmitting LLDP packets based on mbuf packet type. 
* **Updated Intel ice driver.** diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 80b06518b0..1db1267eec 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -158,9 +158,8 @@ #define IAVF_TX_LLDP_DYNFIELD "intel_pmd_dynfield_tx_lldp" #define IAVF_CHECK_TX_LLDP(m) \ ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ - (*RTE_MBUF_DYNFIELD((m), \ - rte_pmd_iavf_tx_lldp_dynfield_offset, \ - uint8_t *))) + ((*RTE_MBUF_DYNFIELD((m), rte_pmd_iavf_tx_lldp_dynfield_offset, uint8_t *)) || \ + ((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP)) extern uint64_t iavf_timestamp_dynflag; extern int iavf_timestamp_dynfield_offset; -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v2 2/3] net/iavf: add AVX2 context descriptor Tx paths 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 1/3] net/iavf: support LLDP Tx based on mbuf ptype or dynfield Ciara Loftus @ 2026-03-06 11:52 ` Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 3 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-03-06 11:52 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Prior to this commit the transmit paths implemented for AVX2 used the transmit descriptor only, making some offloads and features unavailable on the AVX2 path, like LLDP. Enable two new AVX2 transmit paths, both of which support using a context descriptor, one which performs offload and the other which does not. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/release_26_03.rst | 1 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 4 + drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 5 files changed, 411 insertions(+) diff --git a/doc/guides/rel_notes/release_26_03.rst b/doc/guides/rel_notes/release_26_03.rst index 5ad531aef5..bf21bd9d01 100644 --- a/doc/guides/rel_notes/release_26_03.rst +++ b/doc/guides/rel_notes/release_26_03.rst @@ -81,6 +81,7 @@ New Features * Added support for pre and post VF reset callbacks. * Added support for transmitting LLDP packets based on mbuf packet type. + * Implemented AVX2 context descriptor transmit paths. 
* **Updated Intel ice driver.** diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index 403c61e2e8..153275a51b 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -357,6 +357,8 @@ enum iavf_tx_func_type { IAVF_TX_DEFAULT, IAVF_TX_AVX2, IAVF_TX_AVX2_OFFLOAD, + IAVF_TX_AVX2_CTX, + IAVF_TX_AVX2_CTX_OFFLOAD, IAVF_TX_AVX512, IAVF_TX_AVX512_OFFLOAD, IAVF_TX_AVX512_CTX, diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..6079d2e030 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3616,6 +3616,24 @@ static const struct ci_tx_path_info iavf_tx_path_infos[] = { .simd_width = RTE_VECT_SIMD_256 } }, + [IAVF_TX_AVX2_CTX] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx, + .info = "Vector AVX2 Ctx", + .features = { + .tx_offloads = IAVF_TX_VECTOR_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, + [IAVF_TX_AVX2_CTX_OFFLOAD] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx_offload, + .info = "Vector AVX2 Ctx Offload", + .features = { + .tx_offloads = IAVF_TX_VECTOR_CTX_OFFLOAD_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, #ifdef CC_AVX512_SUPPORT [IAVF_TX_AVX512] = { .pkt_burst = iavf_xmit_pkts_vec_avx512, diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 1db1267eec..6729cd4d45 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -587,6 +587,10 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int iavf_get_monitor_addr(void *rx_queue, struct 
rte_power_monitor_cond *pmc); int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..2e7fe96d2b 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -1763,6 +1763,392 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } +static inline void +iavf_fill_ctx_desc_tunneling_avx2(uint64_t *low_ctx_qw, struct rte_mbuf *pkt) +{ + if (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (pkt->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = pkt->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. 
+ * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (pkt->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (pkt->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *low_ctx_qw = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; + + } else { + *low_ctx_qw = 0; + } +} + +static inline void +iavf_fill_ctx_desc_tunneling_field(volatile uint64_t *qw0, + const struct rte_mbuf *m) +{ + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (m->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = m->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case 
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. + * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; +} + +static __rte_always_inline void +ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, + uint64_t flags, bool offload, uint8_t vlan_flag) +{ + uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw = 0; + + if (offload) { + iavf_fill_ctx_desc_tunneling_avx2(&low_ctx_qw, pkt); +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (pkt->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? 
+ (uint64_t)pkt->vlan_tci_outer : + (uint64_t)pkt->vlan_tci; + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= (uint64_t)pkt->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } +#endif + } + if (IAVF_CHECK_TX_LLDP(pkt)) + high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + uint64_t high_data_qw = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) | + ((uint64_t)pkt->data_len << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)); + if (offload) + iavf_txd_enable_offload(pkt, &high_data_qw, vlan_flag); + + __m256i ctx_data_desc = _mm256_set_epi64x(high_data_qw, pkt->buf_iova + pkt->data_off, + high_ctx_qw, low_ctx_qw); + + _mm256_storeu_si256(RTE_CAST_PTR(__m256i *, txdp), ctx_data_desc); +} + +static __rte_always_inline void +ctx_vtx(volatile struct ci_tx_desc *txdp, + struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, + bool offload, uint8_t vlan_flag) +{ + uint64_t hi_data_qw_tmpl = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT)); + + /* if unaligned on 32-bit boundary, do one to align */ + if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + nb_pkts--, txdp++, pkt++; + } + + for (; nb_pkts > 1; txdp += 4, pkt += 2, nb_pkts -= 2) { + uint64_t hi_ctx_qw1 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t hi_ctx_qw0 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw1 = 0; + uint64_t low_ctx_qw0 = 0; + uint64_t hi_data_qw1 = 0; + uint64_t hi_data_qw0 = 0; + + hi_data_qw1 = hi_data_qw_tmpl | + ((uint64_t)pkt[1]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + hi_data_qw0 = hi_data_qw_tmpl | + ((uint64_t)pkt[0]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if 
(offload) { + if (pkt[1]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[1]->vlan_tci_outer : + (uint64_t)pkt[1]->vlan_tci; + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[1]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw1 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= + (uint64_t)pkt[1]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[1])) + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[0]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[0]->vlan_tci_outer : + (uint64_t)pkt[0]->vlan_tci; + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[0]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw0 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= + (uint64_t)pkt[0]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[0])) + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + + if (offload) { + iavf_txd_enable_offload(pkt[1], &hi_data_qw1, vlan_flag); + iavf_txd_enable_offload(pkt[0], &hi_data_qw0, vlan_flag); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw1, pkt[1]); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw0, pkt[0]); + } + + __m256i desc2_3 = + _mm256_set_epi64x + (hi_data_qw1, pkt[1]->buf_iova + pkt[1]->data_off, + hi_ctx_qw1, low_ctx_qw1); + __m256i desc0_1 = + _mm256_set_epi64x + (hi_data_qw0, pkt[0]->buf_iova + pkt[0]->data_off, + hi_ctx_qw0, low_ctx_qw0); + 
_mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp + 2), desc2_3); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp), desc0_1); + } + + if (nb_pkts) + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); +} + +static __rte_always_inline uint16_t +iavf_xmit_fixed_burst_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + volatile struct ci_tx_desc *txdp; + struct ci_tx_entry_vec *txep; + uint16_t n, nb_commit, nb_mbuf, tx_id; + /* bit2 is reserved and must be set to 1 according to Spec */ + uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; + uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; + + if (txq->nb_tx_free < txq->tx_free_thresh) + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); + + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); + nb_commit &= 0xFFFE; + if (unlikely(nb_commit == 0)) + return 0; + + nb_pkts = nb_commit >> 1; + tx_id = txq->tx_tail; + txdp = &txq->ci_tx_ring[tx_id]; + txep = (void *)txq->sw_ring; + txep += (tx_id >> 1); + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); + n = (uint16_t)(txq->nb_tx_desc - tx_id); + + if (n != 0 && nb_commit >= n) { + nb_mbuf = n >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag); + tx_pkts += (nb_mbuf - 1); + txdp += (n - 2); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag); + + nb_commit = (uint16_t)(nb_commit - n); + + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + tx_id = 0; + /* avoid reach the end of ring */ + txdp = txq->ci_tx_ring; + txep = (void *)txq->sw_ring; + } + + nb_mbuf = nb_commit >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); + tx_id = (uint16_t)(tx_id + nb_commit); + + if (tx_id > txq->tx_next_rs) { + txq->ci_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + 
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << + IAVF_TXD_QW1_CMD_SHIFT); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); + } + + txq->tx_tail = tx_id; + + IAVF_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); + return nb_pkts; +} + +static __rte_always_inline uint16_t +iavf_xmit_pkts_vec_avx2_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + uint16_t nb_tx = 0; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + + while (nb_pkts) { + uint16_t ret, num; + + /* cross rs_thresh boundary is not allowed */ + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); + num = num >> 1; + ret = iavf_xmit_fixed_burst_vec_avx2_ctx(tx_queue, &tx_pkts[nb_tx], + num, offload); + nb_tx += ret; + nb_pkts -= ret; + if (ret < num) + break; + } + + return nb_tx; +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, false); +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, true); +} + static __rte_always_inline uint16_t iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v2 3/3] doc: announce change to LLDP packet detection in iavf PMD 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 1/3] net/iavf: support LLDP Tx based on mbuf ptype or dynfield Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-03-06 11:52 ` Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 3 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-03-06 11:52 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus The iavf PMD transmit paths currently use two methods to identify if an mbuf holds an LLDP packet: a dynamic mbuf field and the mbuf ptype. A future release will remove the dynamic mbuf field approach at which point the ptype will be the only approach used. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/deprecation.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index ac667e91a6..e450f9e56d 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -139,3 +139,7 @@ Deprecation Notices * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK. Those API functions are used internally by DPDK core and netvsc PMD. + +* net/iavf: The method of detecting an LLDP packet on the transmit path + in the iavf PMD will be changed. Instead of using either a dynamic mbuf + field or the mbuf ptype, only the mbuf ptype will be used. -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus ` (2 preceding siblings ...) 2026-03-06 11:52 ` [PATCH v2 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus @ 2026-04-17 10:08 ` Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus ` (3 more replies) 3 siblings, 4 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:08 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus This series adds another way to detect LLDP packets in the Tx paths of the iavf PMD, which is based on the mbuf packet type field. This is in addition to the existing method of using a dynamic field. The series also adds new AVX2 context descriptor Tx paths that support LLDP. Finally, a deprecation notice is added to the documentation to flag that the dynamic mbuf field method of LLDP packet detection will be removed in a future release. v3: * Enable the ptype support via a new devarg enable_ptype_lldp Ciara Loftus (3): net/iavf: support LLDP Tx via mbuf ptype or dynfield net/iavf: add AVX2 context descriptor Tx paths doc: announce change to LLDP packet detection in iavf PMD doc/guides/nics/intel_vf.rst | 21 +- doc/guides/rel_notes/deprecation.rst | 4 + doc/guides/rel_notes/release_26_07.rst | 6 + drivers/net/intel/iavf/iavf.h | 3 + drivers/net/intel/iavf/iavf_ethdev.c | 15 + drivers/net/intel/iavf/iavf_rxtx.c | 20 +- drivers/net/intel/iavf/iavf_rxtx.h | 13 +- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 8 files changed, 458 insertions(+), 10 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus @ 2026-04-17 10:08 ` Ciara Loftus 2026-04-17 10:57 ` Bruce Richardson 2026-04-17 10:08 ` [PATCH v3 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus ` (2 subsequent siblings) 3 siblings, 1 reply; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:08 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Previously, the only way to transmit LLDP packets via the iavf PMD was to register the IAVF_TX_LLDP_DYNFIELD dynamic mbuf field and set it to a non-zero value on each LLDP mbuf before Tx. This patch adds an alternative. If the new devarg `enable_ptype_lldp` is set to 1 and the mbuf packet type is set to RTE_PTYPE_L2_ETHER_LLDP, then a Tx path with context descriptor support will be selected and any packets with the LLDP ptype will be transmitted. The dynamic mbuf field support is still present; however, it is intended that it will be removed in a future release, at which point only the packet type approach will be supported. 
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- v3: * Enable ptype LLDP via new devarg --- doc/guides/nics/intel_vf.rst | 21 ++++++++++++++++----- doc/guides/rel_notes/release_26_07.rst | 5 +++++ drivers/net/intel/iavf/iavf.h | 1 + drivers/net/intel/iavf/iavf_ethdev.c | 15 +++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 2 +- drivers/net/intel/iavf/iavf_rxtx.h | 9 +++++---- 6 files changed, 43 insertions(+), 10 deletions(-) diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index 5fa2ddc9ea..4b65fa50cc 100644 --- a/doc/guides/nics/intel_vf.rst +++ b/doc/guides/nics/intel_vf.rst @@ -675,12 +675,14 @@ Inline IPsec Support Diagnostic Utilities -------------------- -Register mbuf dynfield to test Tx LLDP -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Tx LLDP Testing +~~~~~~~~~~~~~~~ -Register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` on ``dev_start`` -to indicate the need to send LLDP packet. -This dynfield needs to be set to 1 when preparing packet. +There are two methods to trigger LLDP packet transmission from the VF. + +The first method is to register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` before +``dev_start``. This dynfield needs to be set to 1 when preparing an LLDP packet +intended for transmission. For ``dpdk-testpmd`` application, it needs to stop and restart Tx port to take effect. @@ -688,6 +690,15 @@ Usage:: testpmd> set tx lldp on +An alternative method for transmitting LLDP packets is to set the ``packet_type`` of +the mbuf to ``RTE_PTYPE_L2_ETHER_LLDP``. This, in conjunction with enabling the +``enable_ptype_lldp`` devarg will cause such packets to be transmitted:: + + -a 0000:xx:xx.x,enable_ptype_lldp=1 + +When ``enable_ptype_lldp`` is not set (default), ptype-based LLDP detection is +disabled, but LLDP transmission via the dynamic mbuf field remains available. 
+ Limitations or Knowing issues ----------------------------- diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..e0b27a554a 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -56,6 +56,11 @@ New Features ======================================================= +* **Updated Intel iavf driver.** + + * Added support for transmitting LLDP packets based on mbuf packet type. + + Removed Items ------------- diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index f8008d0fda..ef503a1b64 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -323,6 +323,7 @@ struct iavf_devargs { int auto_reconfig; int no_poll_on_link_down; uint64_t mbuf_check; + int enable_ptype_lldp; }; struct iavf_security_ctx; diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 3126d9b644..77e8c6b54b 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -44,9 +44,11 @@ #define IAVF_ENABLE_AUTO_RECONFIG_ARG "auto_reconfig" #define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down" #define IAVF_MBUF_CHECK_ARG "mbuf_check" +#define IAVF_ENABLE_PTYPE_LLDP_ARG "enable_ptype_lldp" uint64_t iavf_timestamp_dynflag; int iavf_timestamp_dynfield_offset = -1; int rte_pmd_iavf_tx_lldp_dynfield_offset = -1; +bool iavf_ptype_lldp_enabled; static const char * const iavf_valid_args[] = { IAVF_PROTO_XTR_ARG, @@ -56,6 +58,7 @@ static const char * const iavf_valid_args[] = { IAVF_ENABLE_AUTO_RECONFIG_ARG, IAVF_NO_POLL_ON_LINK_DOWN_ARG, IAVF_MBUF_CHECK_ARG, + IAVF_ENABLE_PTYPE_LLDP_ARG, NULL }; @@ -1016,6 +1019,11 @@ iavf_dev_start(struct rte_eth_dev *dev) /* Check Tx LLDP dynfield */ rte_pmd_iavf_tx_lldp_dynfield_offset = rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); + if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + PMD_DRV_LOG(WARNING, + "The LLDP Tx dynamic mbuf field 
will be removed in a future release."); + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; if (iavf_init_queues(dev) != 0) { PMD_DRV_LOG(ERR, "failed to do Queue init"); @@ -2445,6 +2453,11 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev) if (ret) goto bail; + ret = rte_kvargs_process(kvlist, IAVF_ENABLE_PTYPE_LLDP_ARG, + &parse_bool, &ad->devargs.enable_ptype_lldp); + if (ret) + goto bail; + bail: rte_kvargs_free(kvlist); return ret; @@ -2795,6 +2808,8 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) */ rte_pmd_iavf_tx_lldp_dynfield_offset = rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; iavf_set_tx_function(eth_dev); return 0; } diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..bb62ab07de 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3886,7 +3886,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) if (iavf_tx_vec_dev_check(dev) != -1) req_features.simd_width = iavf_get_max_simd_bitwidth(); - if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + if (iavf_ptype_lldp_enabled) req_features.ctx_desc = true; for (i = 0; i < dev->data->nb_tx_queues; i++) { diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 80b06518b0..0157c4c37e 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -157,14 +157,15 @@ #define IAVF_TX_LLDP_DYNFIELD "intel_pmd_dynfield_tx_lldp" #define IAVF_CHECK_TX_LLDP(m) \ - ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ - (*RTE_MBUF_DYNFIELD((m), \ - rte_pmd_iavf_tx_lldp_dynfield_offset, \ - uint8_t *))) + (iavf_ptype_lldp_enabled && \ + (((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP || \ + (rte_pmd_iavf_tx_lldp_dynfield_offset > 0 && \ + *RTE_MBUF_DYNFIELD((m), 
rte_pmd_iavf_tx_lldp_dynfield_offset, uint8_t *)))) extern uint64_t iavf_timestamp_dynflag; extern int iavf_timestamp_dynfield_offset; extern int rte_pmd_iavf_tx_lldp_dynfield_offset; +extern bool iavf_ptype_lldp_enabled; typedef void (*iavf_rxd_to_pkt_fields_t)(struct ci_rx_queue *rxq, struct rte_mbuf *mb, -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-17 10:08 ` [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus @ 2026-04-17 10:57 ` Bruce Richardson 2026-04-22 9:19 ` Loftus, Ciara 0 siblings, 1 reply; 25+ messages in thread From: Bruce Richardson @ 2026-04-17 10:57 UTC (permalink / raw) To: Ciara Loftus; +Cc: dev On Fri, Apr 17, 2026 at 10:08:02AM +0000, Ciara Loftus wrote: > Previously, the only way to transmit LLDP packets via the iavf PMD > was to register the IAVF_TX_LLDP_DYNFIELD dynamic mbuf field and set > it to a non-zero value on each LLDP mbuf before Tx. > > This patch adds an alternative. If the new devarg `enable_ptype_lldp` is > set to 1, and if the mbuf packet type is set to RTE_PTYPE_L2_ETHER_LLDP > then a Tx path with context descriptor support will be selected and any > packets with the LLDP ptype will be transmitted. > > The dynamic mbuf field support is still present however it is intended > that it will be removed in a future release, at which point only the > packet type approach will be supported. > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> > --- > v3: > * Enable ptype LLDP via new devarg > --- Some feedback around documentation inline below. 
/Bruce > doc/guides/nics/intel_vf.rst | 21 ++++++++++++++++----- > doc/guides/rel_notes/release_26_07.rst | 5 +++++ > drivers/net/intel/iavf/iavf.h | 1 + > drivers/net/intel/iavf/iavf_ethdev.c | 15 +++++++++++++++ > drivers/net/intel/iavf/iavf_rxtx.c | 2 +- > drivers/net/intel/iavf/iavf_rxtx.h | 9 +++++---- > 6 files changed, 43 insertions(+), 10 deletions(-) > > diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst > index 5fa2ddc9ea..4b65fa50cc 100644 > --- a/doc/guides/nics/intel_vf.rst > +++ b/doc/guides/nics/intel_vf.rst > @@ -675,12 +675,14 @@ Inline IPsec Support > Diagnostic Utilities > -------------------- > > -Register mbuf dynfield to test Tx LLDP > -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > +Tx LLDP Testing > +~~~~~~~~~~~~~~~ > > -Register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` on ``dev_start`` > -to indicate the need to send LLDP packet. > -This dynfield needs to be set to 1 when preparing packet. > +There are two methods to trigger LLDP packet transmission from the VF. > + > +The first method is to register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` before > +``dev_start``. This dynfield needs to be set to 1 when preparing an LLDP packet > +intended for transmission. > Switch the order of the methods in the docs. We should always start with the best/recommended method first, so the user doesn't have to read through the less-recommended methods in order to get to the best one. Also note, when describing the dyn-field method, that it is deprecated and will be removed in future. > For ``dpdk-testpmd`` application, it needs to stop and restart Tx port to take effect. > > @@ -688,6 +690,15 @@ Usage:: > > testpmd> set tx lldp on > > +An alternative method for transmitting LLDP packets is to set the ``packet_type`` of > +the mbuf to ``RTE_PTYPE_L2_ETHER_LLDP``.
This, in conjunction with enabling the > +``enable_ptype_lldp`` devarg will cause such packets to be transmitted:: > + > + -a 0000:xx:xx.x,enable_ptype_lldp=1 > + > +When ``enable_ptype_lldp`` is not set (default), ptype-based LLDP detection is > +disabled, but LLDP transmission via the dynamic mbuf field remains available. > + The previous method describes testpmd use. Do we need such a description here too? > > Limitations or Knowing issues > ----------------------------- > diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst > index 060b26ff61..e0b27a554a 100644 > --- a/doc/guides/rel_notes/release_26_07.rst > +++ b/doc/guides/rel_notes/release_26_07.rst > @@ -56,6 +56,11 @@ New Features > ======================================================= > > > +* **Updated Intel iavf driver.** > + > + * Added support for transmitting LLDP packets based on mbuf packet type. > + > + > Removed Items > ------------- > > diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h > index f8008d0fda..ef503a1b64 100644 > --- a/drivers/net/intel/iavf/iavf.h > +++ b/drivers/net/intel/iavf/iavf.h > @@ -323,6 +323,7 @@ struct iavf_devargs { > int auto_reconfig; > int no_poll_on_link_down; > uint64_t mbuf_check; > + int enable_ptype_lldp; > }; > > struct iavf_security_ctx; > diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c > index 3126d9b644..77e8c6b54b 100644 > --- a/drivers/net/intel/iavf/iavf_ethdev.c > +++ b/drivers/net/intel/iavf/iavf_ethdev.c > @@ -44,9 +44,11 @@ > #define IAVF_ENABLE_AUTO_RECONFIG_ARG "auto_reconfig" > #define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down" > #define IAVF_MBUF_CHECK_ARG "mbuf_check" > +#define IAVF_ENABLE_PTYPE_LLDP_ARG "enable_ptype_lldp" > uint64_t iavf_timestamp_dynflag; > int iavf_timestamp_dynfield_offset = -1; > int rte_pmd_iavf_tx_lldp_dynfield_offset = -1; > +bool iavf_ptype_lldp_enabled; > > static const char * const iavf_valid_args[] = 
{ > IAVF_PROTO_XTR_ARG, > @@ -56,6 +58,7 @@ static const char * const iavf_valid_args[] = { > IAVF_ENABLE_AUTO_RECONFIG_ARG, > IAVF_NO_POLL_ON_LINK_DOWN_ARG, > IAVF_MBUF_CHECK_ARG, > + IAVF_ENABLE_PTYPE_LLDP_ARG, > NULL > }; > > @@ -1016,6 +1019,11 @@ iavf_dev_start(struct rte_eth_dev *dev) > /* Check Tx LLDP dynfield */ > rte_pmd_iavf_tx_lldp_dynfield_offset = > rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); > + if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) > + PMD_DRV_LOG(WARNING, > + "The LLDP Tx dynamic mbuf field will be removed in a future release."); I think this message could do with being expanded and clarified, even if it is longer. For example: "Using a dynamic mbuf field to identify LLDP packets is deprecated. Set the 'enable_ptype_lldp' driver option and mbuf LLDP ptypes instead" > + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || > + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; > > if (iavf_init_queues(dev) != 0) { > PMD_DRV_LOG(ERR, "failed to do Queue init"); > @@ -2445,6 +2453,11 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev) > if (ret) > goto bail; > > + ret = rte_kvargs_process(kvlist, IAVF_ENABLE_PTYPE_LLDP_ARG, > + &parse_bool, &ad->devargs.enable_ptype_lldp); > + if (ret) > + goto bail; > + > bail: > rte_kvargs_free(kvlist); > return ret; > @@ -2795,6 +2808,8 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) > */ > rte_pmd_iavf_tx_lldp_dynfield_offset = > rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); > + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || > + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; > iavf_set_tx_function(eth_dev); > return 0; > } > diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c > index 4ff6c18dc4..bb62ab07de 100644 > --- a/drivers/net/intel/iavf/iavf_rxtx.c > +++ b/drivers/net/intel/iavf/iavf_rxtx.c > @@ -3886,7 +3886,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) > if (iavf_tx_vec_dev_check(dev) != -1) > req_features.simd_width = 
iavf_get_max_simd_bitwidth(); > > - if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) > + if (iavf_ptype_lldp_enabled) > req_features.ctx_desc = true; > > for (i = 0; i < dev->data->nb_tx_queues; i++) { > diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h > index 80b06518b0..0157c4c37e 100644 > --- a/drivers/net/intel/iavf/iavf_rxtx.h > +++ b/drivers/net/intel/iavf/iavf_rxtx.h > @@ -157,14 +157,15 @@ > > #define IAVF_TX_LLDP_DYNFIELD "intel_pmd_dynfield_tx_lldp" > #define IAVF_CHECK_TX_LLDP(m) \ > - ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ > - (*RTE_MBUF_DYNFIELD((m), \ > - rte_pmd_iavf_tx_lldp_dynfield_offset, \ > - uint8_t *))) > + (iavf_ptype_lldp_enabled && \ > + (((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP || \ > + (rte_pmd_iavf_tx_lldp_dynfield_offset > 0 && \ > + *RTE_MBUF_DYNFIELD((m), rte_pmd_iavf_tx_lldp_dynfield_offset, uint8_t *)))) > > extern uint64_t iavf_timestamp_dynflag; > extern int iavf_timestamp_dynfield_offset; > extern int rte_pmd_iavf_tx_lldp_dynfield_offset; > +extern bool iavf_ptype_lldp_enabled; > > typedef void (*iavf_rxd_to_pkt_fields_t)(struct ci_rx_queue *rxq, > struct rte_mbuf *mb, > -- > 2.43.0 > ^ permalink raw reply [flat|nested] 25+ messages in thread
* RE: [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-17 10:57 ` Bruce Richardson @ 2026-04-22 9:19 ` Loftus, Ciara 0 siblings, 0 replies; 25+ messages in thread From: Loftus, Ciara @ 2026-04-22 9:19 UTC (permalink / raw) To: Richardson, Bruce; +Cc: dev@dpdk.org > > @@ -688,6 +690,15 @@ Usage:: > > > > testpmd> set tx lldp on > > > > +An alternative method for transmitting LLDP packets is to set the > ``packet_type`` of > > +the mbuf to ``RTE_PTYPE_L2_ETHER_LLDP``. This, in conjunction with > enabling the > > +``enable_ptype_lldp`` devarg will cause such packets to be transmitted:: > > + > > + -a 0000:xx:xx.x,enable_ptype_lldp=1 > > + > > +When ``enable_ptype_lldp`` is not set (default), ptype-based LLDP detection > is > > +disabled, but LLDP transmission via the dynamic mbuf field remains > available. > > + > > The previous method describes testpmd use. Do we need such a description > here too? > > > The ptype method doesn't require any specific enabling steps in testpmd; it is enabled only via the devarg. I've updated the documentation to make that clearer. ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v3 2/3] net/iavf: add AVX2 context descriptor Tx paths 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus @ 2026-04-17 10:08 ` Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 3 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:08 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Prior to this commit the transmit paths implemented for AVX2 used the transmit descriptor only, making some offloads and features unavailable on the AVX2 path, like LLDP. Enable two new AVX2 transmit paths, both of which support using a context descriptor, one which performs offload and the other which does not. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/release_26_07.rst | 1 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 4 + drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 5 files changed, 411 insertions(+) diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index e0b27a554a..6ecb0d4f38 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -59,6 +59,7 @@ New Features * **Updated Intel iavf driver.** * Added support for transmitting LLDP packets based on mbuf packet type. + * Implemented AVX2 context descriptor transmit paths. 
Removed Items diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index ef503a1b64..7c39d516a7 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -360,6 +360,8 @@ enum iavf_tx_func_type { IAVF_TX_DEFAULT, IAVF_TX_AVX2, IAVF_TX_AVX2_OFFLOAD, + IAVF_TX_AVX2_CTX, + IAVF_TX_AVX2_CTX_OFFLOAD, IAVF_TX_AVX512, IAVF_TX_AVX512_OFFLOAD, IAVF_TX_AVX512_CTX, diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index bb62ab07de..b0c9039bbf 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3616,6 +3616,24 @@ static const struct ci_tx_path_info iavf_tx_path_infos[] = { .simd_width = RTE_VECT_SIMD_256 } }, + [IAVF_TX_AVX2_CTX] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx, + .info = "Vector AVX2 Ctx", + .features = { + .tx_offloads = IAVF_TX_VECTOR_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, + [IAVF_TX_AVX2_CTX_OFFLOAD] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx_offload, + .info = "Vector AVX2 Ctx Offload", + .features = { + .tx_offloads = IAVF_TX_VECTOR_CTX_OFFLOAD_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, #ifdef CC_AVX512_SUPPORT [IAVF_TX_AVX512] = { .pkt_burst = iavf_xmit_pkts_vec_avx512, diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 0157c4c37e..d4f0f4240c 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -589,6 +589,10 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc); 
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..2e7fe96d2b 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -1763,6 +1763,392 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } +static inline void +iavf_fill_ctx_desc_tunneling_avx2(uint64_t *low_ctx_qw, struct rte_mbuf *pkt) +{ + if (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (pkt->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = pkt->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. 
+ * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (pkt->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (pkt->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *low_ctx_qw = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; + + } else { + *low_ctx_qw = 0; + } +} + +static inline void +iavf_fill_ctx_desc_tunneling_field(volatile uint64_t *qw0, + const struct rte_mbuf *m) +{ + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (m->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = m->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case 
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. + * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; +} + +static __rte_always_inline void +ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, + uint64_t flags, bool offload, uint8_t vlan_flag) +{ + uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw = 0; + + if (offload) { + iavf_fill_ctx_desc_tunneling_avx2(&low_ctx_qw, pkt); +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (pkt->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? 
+ (uint64_t)pkt->vlan_tci_outer : + (uint64_t)pkt->vlan_tci; + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= (uint64_t)pkt->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } +#endif + } + if (IAVF_CHECK_TX_LLDP(pkt)) + high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + uint64_t high_data_qw = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) | + ((uint64_t)pkt->data_len << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)); + if (offload) + iavf_txd_enable_offload(pkt, &high_data_qw, vlan_flag); + + __m256i ctx_data_desc = _mm256_set_epi64x(high_data_qw, pkt->buf_iova + pkt->data_off, + high_ctx_qw, low_ctx_qw); + + _mm256_storeu_si256(RTE_CAST_PTR(__m256i *, txdp), ctx_data_desc); +} + +static __rte_always_inline void +ctx_vtx(volatile struct ci_tx_desc *txdp, + struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, + bool offload, uint8_t vlan_flag) +{ + uint64_t hi_data_qw_tmpl = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT)); + + /* if unaligned on 32-bit boundary, do one to align */ + if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + nb_pkts--, txdp++, pkt++; + } + + for (; nb_pkts > 1; txdp += 4, pkt += 2, nb_pkts -= 2) { + uint64_t hi_ctx_qw1 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t hi_ctx_qw0 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw1 = 0; + uint64_t low_ctx_qw0 = 0; + uint64_t hi_data_qw1 = 0; + uint64_t hi_data_qw0 = 0; + + hi_data_qw1 = hi_data_qw_tmpl | + ((uint64_t)pkt[1]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + hi_data_qw0 = hi_data_qw_tmpl | + ((uint64_t)pkt[0]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if 
(offload) { + if (pkt[1]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[1]->vlan_tci_outer : + (uint64_t)pkt[1]->vlan_tci; + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[1]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw1 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= + (uint64_t)pkt[1]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[1])) + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[0]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[0]->vlan_tci_outer : + (uint64_t)pkt[0]->vlan_tci; + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[0]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw0 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= + (uint64_t)pkt[0]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[0])) + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + + if (offload) { + iavf_txd_enable_offload(pkt[1], &hi_data_qw1, vlan_flag); + iavf_txd_enable_offload(pkt[0], &hi_data_qw0, vlan_flag); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw1, pkt[1]); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw0, pkt[0]); + } + + __m256i desc2_3 = + _mm256_set_epi64x + (hi_data_qw1, pkt[1]->buf_iova + pkt[1]->data_off, + hi_ctx_qw1, low_ctx_qw1); + __m256i desc0_1 = + _mm256_set_epi64x + (hi_data_qw0, pkt[0]->buf_iova + pkt[0]->data_off, + hi_ctx_qw0, low_ctx_qw0); + 
_mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp + 2), desc2_3); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp), desc0_1); + } + + if (nb_pkts) + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); +} + +static __rte_always_inline uint16_t +iavf_xmit_fixed_burst_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + volatile struct ci_tx_desc *txdp; + struct ci_tx_entry_vec *txep; + uint16_t n, nb_commit, nb_mbuf, tx_id; + /* bit2 is reserved and must be set to 1 according to Spec */ + uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; + uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; + + if (txq->nb_tx_free < txq->tx_free_thresh) + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); + + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); + nb_commit &= 0xFFFE; + if (unlikely(nb_commit == 0)) + return 0; + + nb_pkts = nb_commit >> 1; + tx_id = txq->tx_tail; + txdp = &txq->ci_tx_ring[tx_id]; + txep = (void *)txq->sw_ring; + txep += (tx_id >> 1); + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); + n = (uint16_t)(txq->nb_tx_desc - tx_id); + + if (n != 0 && nb_commit >= n) { + nb_mbuf = n >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag); + tx_pkts += (nb_mbuf - 1); + txdp += (n - 2); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag); + + nb_commit = (uint16_t)(nb_commit - n); + + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + tx_id = 0; + /* avoid reach the end of ring */ + txdp = txq->ci_tx_ring; + txep = (void *)txq->sw_ring; + } + + nb_mbuf = nb_commit >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); + tx_id = (uint16_t)(tx_id + nb_commit); + + if (tx_id > txq->tx_next_rs) { + txq->ci_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + 
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << + IAVF_TXD_QW1_CMD_SHIFT); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); + } + + txq->tx_tail = tx_id; + + IAVF_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); + return nb_pkts; +} + +static __rte_always_inline uint16_t +iavf_xmit_pkts_vec_avx2_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + uint16_t nb_tx = 0; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + + while (nb_pkts) { + uint16_t ret, num; + + /* cross rs_thresh boundary is not allowed */ + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); + num = num >> 1; + ret = iavf_xmit_fixed_burst_vec_avx2_ctx(tx_queue, &tx_pkts[nb_tx], + num, offload); + nb_tx += ret; + nb_pkts -= ret; + if (ret < num) + break; + } + + return nb_tx; +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, false); +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, true); +} + static __rte_always_inline uint16_t iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v3 3/3] doc: announce change to LLDP packet detection in iavf PMD 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-04-17 10:08 ` Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 3 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:08 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus The iavf PMD transmit paths currently use two methods to identify if an mbuf holds an LLDP packet: a dynamic mbuf field and the mbuf ptype. The ptype method is enabled via the enable_ptype_lldp devarg. A future release will remove the dynamic mbuf field approach at which point the ptype will be the only approach used. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/deprecation.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 35c9b4e06c..17f90a6352 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -154,3 +154,7 @@ Deprecation Notices * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK. Those API functions are used internally by DPDK core and netvsc PMD. + +* net/iavf: The dynamic mbuf field used to detect LLDP packets on the + transmit path in the iavf PMD will be removed in a future release. + After removal, only packet type-based detection will be supported. -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus ` (2 preceding siblings ...) 2026-04-17 10:08 ` [PATCH v3 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus @ 2026-04-17 10:56 ` Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus ` (2 more replies) 3 siblings, 3 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:56 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus This series adds another way to detect LLDP packets in the Tx paths of the iavf PMD, which is based on the mbuf packet type field. This is in addition to the existing method of using a dynamic field. The series also adds new AVX2 context descriptor Tx paths that support LLDP. Finally, a deprecation notice is added to the documentation to flag that the dynamic mbuf field method of LLDP packet detection will be removed in a future release. v4: * Fix build error by replacing commas with semicolons in patch 2 Ciara Loftus (3): net/iavf: support LLDP Tx via mbuf ptype or dynfield net/iavf: add AVX2 context descriptor Tx paths doc: announce change to LLDP packet detection in iavf PMD doc/guides/nics/intel_vf.rst | 21 +- doc/guides/rel_notes/deprecation.rst | 4 + doc/guides/rel_notes/release_26_07.rst | 6 + drivers/net/intel/iavf/iavf.h | 3 + drivers/net/intel/iavf/iavf_ethdev.c | 15 + drivers/net/intel/iavf/iavf_rxtx.c | 20 +- drivers/net/intel/iavf/iavf_rxtx.h | 13 +- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 8 files changed, 458 insertions(+), 10 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus @ 2026-04-17 10:56 ` Ciara Loftus 2026-04-17 13:00 ` Bruce Richardson 2026-04-17 10:56 ` [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2 siblings, 1 reply; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:56 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Previously, the only way to transmit LLDP packets via the iavf PMD was to register the IAVF_TX_LLDP_DYNFIELD dynamic mbuf field and set it to a non-zero value on each LLDP mbuf before Tx. This patch adds an alternative. If the new devarg `enable_ptype_lldp` is set to 1, and if the mbuf packet type is set to RTE_PTYPE_L2_ETHER_LLDP, then a Tx path with context descriptor support will be selected and any packets with the LLDP ptype will be transmitted. The dynamic mbuf field support is still present; however, it is intended that it will be removed in a future release, at which point only the packet type approach will be supported.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/nics/intel_vf.rst | 21 ++++++++++++++++----- doc/guides/rel_notes/release_26_07.rst | 5 +++++ drivers/net/intel/iavf/iavf.h | 1 + drivers/net/intel/iavf/iavf_ethdev.c | 15 +++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 2 +- drivers/net/intel/iavf/iavf_rxtx.h | 9 +++++---- 6 files changed, 43 insertions(+), 10 deletions(-) diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index 5fa2ddc9ea..4b65fa50cc 100644 --- a/doc/guides/nics/intel_vf.rst +++ b/doc/guides/nics/intel_vf.rst @@ -675,12 +675,14 @@ Inline IPsec Support Diagnostic Utilities -------------------- -Register mbuf dynfield to test Tx LLDP -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Tx LLDP Testing +~~~~~~~~~~~~~~~ -Register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` on ``dev_start`` -to indicate the need to send LLDP packet. -This dynfield needs to be set to 1 when preparing packet. +There are two methods to trigger LLDP packet transmission from the VF. + +The first method is to register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` before +``dev_start``. This dynfield needs to be set to 1 when preparing an LLDP packet +intended for transmission. For ``dpdk-testpmd`` application, it needs to stop and restart Tx port to take effect. @@ -688,6 +690,15 @@ Usage:: testpmd> set tx lldp on +An alternative method for transmitting LLDP packets is to set the ``packet_type`` of +the mbuf to ``RTE_PTYPE_L2_ETHER_LLDP``. This, in conjunction with enabling the +``enable_ptype_lldp`` devarg will cause such packets to be transmitted:: + + -a 0000:xx:xx.x,enable_ptype_lldp=1 + +When ``enable_ptype_lldp`` is not set (default), ptype-based LLDP detection is +disabled, but LLDP transmission via the dynamic mbuf field remains available. 
+ Limitations or Knowing issues ----------------------------- diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..e0b27a554a 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -56,6 +56,11 @@ New Features ======================================================= +* **Updated Intel iavf driver.** + + * Added support for transmitting LLDP packets based on mbuf packet type. + + Removed Items ------------- diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index f8008d0fda..ef503a1b64 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -323,6 +323,7 @@ struct iavf_devargs { int auto_reconfig; int no_poll_on_link_down; uint64_t mbuf_check; + int enable_ptype_lldp; }; struct iavf_security_ctx; diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 3126d9b644..77e8c6b54b 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -44,9 +44,11 @@ #define IAVF_ENABLE_AUTO_RECONFIG_ARG "auto_reconfig" #define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down" #define IAVF_MBUF_CHECK_ARG "mbuf_check" +#define IAVF_ENABLE_PTYPE_LLDP_ARG "enable_ptype_lldp" uint64_t iavf_timestamp_dynflag; int iavf_timestamp_dynfield_offset = -1; int rte_pmd_iavf_tx_lldp_dynfield_offset = -1; +bool iavf_ptype_lldp_enabled; static const char * const iavf_valid_args[] = { IAVF_PROTO_XTR_ARG, @@ -56,6 +58,7 @@ static const char * const iavf_valid_args[] = { IAVF_ENABLE_AUTO_RECONFIG_ARG, IAVF_NO_POLL_ON_LINK_DOWN_ARG, IAVF_MBUF_CHECK_ARG, + IAVF_ENABLE_PTYPE_LLDP_ARG, NULL }; @@ -1016,6 +1019,11 @@ iavf_dev_start(struct rte_eth_dev *dev) /* Check Tx LLDP dynfield */ rte_pmd_iavf_tx_lldp_dynfield_offset = rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); + if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + PMD_DRV_LOG(WARNING, + "The LLDP Tx dynamic mbuf field 
will be removed in a future release."); + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; if (iavf_init_queues(dev) != 0) { PMD_DRV_LOG(ERR, "failed to do Queue init"); @@ -2445,6 +2453,11 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev) if (ret) goto bail; + ret = rte_kvargs_process(kvlist, IAVF_ENABLE_PTYPE_LLDP_ARG, + &parse_bool, &ad->devargs.enable_ptype_lldp); + if (ret) + goto bail; + bail: rte_kvargs_free(kvlist); return ret; @@ -2795,6 +2808,8 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) */ rte_pmd_iavf_tx_lldp_dynfield_offset = rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); + iavf_ptype_lldp_enabled = adapter->devargs.enable_ptype_lldp || + rte_pmd_iavf_tx_lldp_dynfield_offset > 0; iavf_set_tx_function(eth_dev); return 0; } diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4ff6c18dc4..bb62ab07de 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3886,7 +3886,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) if (iavf_tx_vec_dev_check(dev) != -1) req_features.simd_width = iavf_get_max_simd_bitwidth(); - if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + if (iavf_ptype_lldp_enabled) req_features.ctx_desc = true; for (i = 0; i < dev->data->nb_tx_queues; i++) { diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 80b06518b0..0157c4c37e 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -157,14 +157,15 @@ #define IAVF_TX_LLDP_DYNFIELD "intel_pmd_dynfield_tx_lldp" #define IAVF_CHECK_TX_LLDP(m) \ - ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ - (*RTE_MBUF_DYNFIELD((m), \ - rte_pmd_iavf_tx_lldp_dynfield_offset, \ - uint8_t *))) + (iavf_ptype_lldp_enabled && \ + (((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP || \ + (rte_pmd_iavf_tx_lldp_dynfield_offset > 0 && \ + *RTE_MBUF_DYNFIELD((m), 
rte_pmd_iavf_tx_lldp_dynfield_offset, uint8_t *)))) extern uint64_t iavf_timestamp_dynflag; extern int iavf_timestamp_dynfield_offset; extern int rte_pmd_iavf_tx_lldp_dynfield_offset; +extern bool iavf_ptype_lldp_enabled; typedef void (*iavf_rxd_to_pkt_fields_t)(struct ci_rx_queue *rxq, struct rte_mbuf *mb, -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-17 10:56 ` [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus @ 2026-04-17 13:00 ` Bruce Richardson 0 siblings, 0 replies; 25+ messages in thread From: Bruce Richardson @ 2026-04-17 13:00 UTC (permalink / raw) To: Ciara Loftus; +Cc: dev On Fri, Apr 17, 2026 at 10:56:08AM +0000, Ciara Loftus wrote: > Previously, the only way to transmit LLDP packets via the iavf PMD > was to register the IAVF_TX_LLDP_DYNFIELD dynamic mbuf field and set > it to a non-zero value on each LLDP mbuf before Tx. > > This patch adds an alternative. If the new devarg `enable_ptype_lldp` is > set to 1, and if the mbuf packet type is set to RTE_PTYPE_L2_ETHER_LLDP > then a Tx path with context descriptor support will be selected and any > packets with the LLDP ptype will be transmitted. > > The dynamic mbuf field support is still present however it is intended > that it will be removed in a future release, at which point only the > packet type approach will be supported. > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> > --- > doc/guides/nics/intel_vf.rst | 21 ++++++++++++++++----- > doc/guides/rel_notes/release_26_07.rst | 5 +++++ > drivers/net/intel/iavf/iavf.h | 1 + > drivers/net/intel/iavf/iavf_ethdev.c | 15 +++++++++++++++ > drivers/net/intel/iavf/iavf_rxtx.c | 2 +- > drivers/net/intel/iavf/iavf_rxtx.h | 9 +++++---- > 6 files changed, 43 insertions(+), 10 deletions(-) > Comments from v3 still apply, I believe. ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus @ 2026-04-17 10:56 ` Ciara Loftus 2026-04-17 12:59 ` Bruce Richardson 2026-04-17 10:56 ` [PATCH v4 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2 siblings, 1 reply; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:56 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Prior to this commit the transmit paths implemented for AVX2 used the transmit descriptor only, making some offloads and features unavailable on the AVX2 path, like LLDP. Enable two new AVX2 transmit paths, both of which support using a context descriptor, one which performs offload and the other which does not. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- v4: * Use semicolons instead of commas to separate statements. --- doc/guides/rel_notes/release_26_07.rst | 1 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 4 + drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 386 ++++++++++++++++++++ 5 files changed, 411 insertions(+) diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index e0b27a554a..6ecb0d4f38 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -59,6 +59,7 @@ New Features * **Updated Intel iavf driver.** * Added support for transmitting LLDP packets based on mbuf packet type. + * Implemented AVX2 context descriptor transmit paths. 
Removed Items diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index ef503a1b64..7c39d516a7 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -360,6 +360,8 @@ enum iavf_tx_func_type { IAVF_TX_DEFAULT, IAVF_TX_AVX2, IAVF_TX_AVX2_OFFLOAD, + IAVF_TX_AVX2_CTX, + IAVF_TX_AVX2_CTX_OFFLOAD, IAVF_TX_AVX512, IAVF_TX_AVX512_OFFLOAD, IAVF_TX_AVX512_CTX, diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index bb62ab07de..b0c9039bbf 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3616,6 +3616,24 @@ static const struct ci_tx_path_info iavf_tx_path_infos[] = { .simd_width = RTE_VECT_SIMD_256 } }, + [IAVF_TX_AVX2_CTX] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx, + .info = "Vector AVX2 Ctx", + .features = { + .tx_offloads = IAVF_TX_VECTOR_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, + [IAVF_TX_AVX2_CTX_OFFLOAD] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx_offload, + .info = "Vector AVX2 Ctx Offload", + .features = { + .tx_offloads = IAVF_TX_VECTOR_CTX_OFFLOAD_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, #ifdef CC_AVX512_SUPPORT [IAVF_TX_AVX512] = { .pkt_burst = iavf_xmit_pkts_vec_avx512, diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 0157c4c37e..d4f0f4240c 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -589,6 +589,10 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc); 
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..47bddcc4ca 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -1763,6 +1763,392 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } +static inline void +iavf_fill_ctx_desc_tunneling_avx2(uint64_t *low_ctx_qw, struct rte_mbuf *pkt) +{ + if (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (pkt->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = pkt->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. 
+ * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (pkt->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (pkt->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *low_ctx_qw = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; + + } else { + *low_ctx_qw = 0; + } +} + +static inline void +iavf_fill_ctx_desc_tunneling_field(volatile uint64_t *qw0, + const struct rte_mbuf *m) +{ + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (m->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = m->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case 
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. + * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; +} + +static __rte_always_inline void +ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, + uint64_t flags, bool offload, uint8_t vlan_flag) +{ + uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw = 0; + + if (offload) { + iavf_fill_ctx_desc_tunneling_avx2(&low_ctx_qw, pkt); +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (pkt->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? 
+ (uint64_t)pkt->vlan_tci_outer : + (uint64_t)pkt->vlan_tci; + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= (uint64_t)pkt->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } +#endif + } + if (IAVF_CHECK_TX_LLDP(pkt)) + high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + uint64_t high_data_qw = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) | + ((uint64_t)pkt->data_len << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)); + if (offload) + iavf_txd_enable_offload(pkt, &high_data_qw, vlan_flag); + + __m256i ctx_data_desc = _mm256_set_epi64x(high_data_qw, pkt->buf_iova + pkt->data_off, + high_ctx_qw, low_ctx_qw); + + _mm256_storeu_si256(RTE_CAST_PTR(__m256i *, txdp), ctx_data_desc); +} + +static __rte_always_inline void +ctx_vtx(volatile struct ci_tx_desc *txdp, + struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, + bool offload, uint8_t vlan_flag) +{ + uint64_t hi_data_qw_tmpl = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT)); + + /* if unaligned on 32-bit boundary, do one to align */ + if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + nb_pkts--; txdp++; pkt++; + } + + for (; nb_pkts > 1; txdp += 4, pkt += 2, nb_pkts -= 2) { + uint64_t hi_ctx_qw1 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t hi_ctx_qw0 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw1 = 0; + uint64_t low_ctx_qw0 = 0; + uint64_t hi_data_qw1 = 0; + uint64_t hi_data_qw0 = 0; + + hi_data_qw1 = hi_data_qw_tmpl | + ((uint64_t)pkt[1]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + hi_data_qw0 = hi_data_qw_tmpl | + ((uint64_t)pkt[0]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if 
(offload) { + if (pkt[1]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[1]->vlan_tci_outer : + (uint64_t)pkt[1]->vlan_tci; + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[1]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw1 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= + (uint64_t)pkt[1]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[1])) + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[0]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[0]->vlan_tci_outer : + (uint64_t)pkt[0]->vlan_tci; + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[0]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw0 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= + (uint64_t)pkt[0]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[0])) + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + + if (offload) { + iavf_txd_enable_offload(pkt[1], &hi_data_qw1, vlan_flag); + iavf_txd_enable_offload(pkt[0], &hi_data_qw0, vlan_flag); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw1, pkt[1]); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw0, pkt[0]); + } + + __m256i desc2_3 = + _mm256_set_epi64x + (hi_data_qw1, pkt[1]->buf_iova + pkt[1]->data_off, + hi_ctx_qw1, low_ctx_qw1); + __m256i desc0_1 = + _mm256_set_epi64x + (hi_data_qw0, pkt[0]->buf_iova + pkt[0]->data_off, + hi_ctx_qw0, low_ctx_qw0); + 
_mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp + 2), desc2_3); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp), desc0_1); + } + + if (nb_pkts) + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); +} + +static __rte_always_inline uint16_t +iavf_xmit_fixed_burst_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + volatile struct ci_tx_desc *txdp; + struct ci_tx_entry_vec *txep; + uint16_t n, nb_commit, nb_mbuf, tx_id; + /* bit2 is reserved and must be set to 1 according to Spec */ + uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; + uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; + + if (txq->nb_tx_free < txq->tx_free_thresh) + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); + + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); + nb_commit &= 0xFFFE; + if (unlikely(nb_commit == 0)) + return 0; + + nb_pkts = nb_commit >> 1; + tx_id = txq->tx_tail; + txdp = &txq->ci_tx_ring[tx_id]; + txep = (void *)txq->sw_ring; + txep += (tx_id >> 1); + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); + n = (uint16_t)(txq->nb_tx_desc - tx_id); + + if (n != 0 && nb_commit >= n) { + nb_mbuf = n >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag); + tx_pkts += (nb_mbuf - 1); + txdp += (n - 2); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag); + + nb_commit = (uint16_t)(nb_commit - n); + + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + tx_id = 0; + /* avoid reach the end of ring */ + txdp = txq->ci_tx_ring; + txep = (void *)txq->sw_ring; + } + + nb_mbuf = nb_commit >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); + tx_id = (uint16_t)(tx_id + nb_commit); + + if (tx_id > txq->tx_next_rs) { + txq->ci_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + 
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << + IAVF_TXD_QW1_CMD_SHIFT); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); + } + + txq->tx_tail = tx_id; + + IAVF_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); + return nb_pkts; +} + +static __rte_always_inline uint16_t +iavf_xmit_pkts_vec_avx2_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + uint16_t nb_tx = 0; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + + while (nb_pkts) { + uint16_t ret, num; + + /* cross rs_thresh boundary is not allowed */ + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); + num = num >> 1; + ret = iavf_xmit_fixed_burst_vec_avx2_ctx(tx_queue, &tx_pkts[nb_tx], + num, offload); + nb_tx += ret; + nb_pkts -= ret; + if (ret < num) + break; + } + + return nb_tx; +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, false); +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, true); +} + static __rte_always_inline uint16_t iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths 2026-04-17 10:56 ` [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-04-17 12:59 ` Bruce Richardson 0 siblings, 0 replies; 25+ messages in thread From: Bruce Richardson @ 2026-04-17 12:59 UTC (permalink / raw) To: Ciara Loftus; +Cc: dev On Fri, Apr 17, 2026 at 10:56:09AM +0000, Ciara Loftus wrote: > Prior to this commit the transmit paths implemented for AVX2 used the > transmit descriptor only, making some offloads and features unavailable > on the AVX2 path, like LLDP. Enable two new AVX2 transmit paths, both > of which support using a context descriptor, one which performs offload > and the other which does not. > > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Acked-by: Bruce Richardson <bruce.richardson@intel.com> ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v4 3/3] doc: announce change to LLDP packet detection in iavf PMD 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-04-17 10:56 ` Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2 siblings, 1 reply; 25+ messages in thread From: Ciara Loftus @ 2026-04-17 10:56 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus The iavf PMD transmit paths currently use two methods to identify if an mbuf holds an LLDP packet: a dynamic mbuf field and the mbuf ptype. The ptype method is enabled via the enable_ptype_lldp devarg. A future release will remove the dynamic mbuf field approach at which point the ptype will be the only approach used. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/deprecation.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 35c9b4e06c..17f90a6352 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -154,3 +154,7 @@ Deprecation Notices * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK. Those API functions are used internally by DPDK core and netvsc PMD. + +* net/iavf: The dynamic mbuf field used to detect LLDP packets on the + transmit path in the iavf PMD will be removed in a future release. + After removal, only packet type-based detection will be supported. -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths 2026-04-17 10:56 ` [PATCH v4 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus @ 2026-04-22 9:16 ` Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus ` (2 more replies) 0 siblings, 3 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-22 9:16 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus This series adds a new way, based on the mbuf packet type field, to detect LLDP packets in the Tx paths of the iavf PMD. This is in addition to the existing method of using a dynamic field. The series also adds new AVX2 context descriptor Tx paths that support LLDP. Finally, a deprecation notice is added to the documentation to flag that the dynamic mbuf field method of LLDP packet detection will be removed in a future release. Patch 1 flags "CHECK:MACRO_ARG_REUSE" in checkpatch for the IAVF_CHECK_TX_LLDP macro. This is a false positive because the inputs to the macro are not reused in a way that would cause issues. v5: * Updated the documentation * Made the "lldp mode" per-device rather than a global value. This is necessary because the devarg is per-device, unlike the global dynfield. The mode is added to the iavf-specific flags in the ci_tx_queue struct so that it can be read for each device on the datapath.
Ciara Loftus (3): net/iavf: support LLDP Tx via mbuf ptype or dynfield net/iavf: add AVX2 context descriptor Tx paths doc: announce change to LLDP packet detection in iavf PMD doc/guides/nics/intel_vf.rst | 33 +- doc/guides/rel_notes/deprecation.rst | 4 + doc/guides/rel_notes/release_26_07.rst | 6 + drivers/net/intel/common/tx.h | 1 + drivers/net/intel/iavf/iavf.h | 3 + drivers/net/intel/iavf/iavf_ethdev.c | 27 ++ drivers/net/intel/iavf/iavf_rxtx.c | 33 +- drivers/net/intel/iavf/iavf_rxtx.h | 21 +- drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 387 ++++++++++++++++++ drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 21 +- 10 files changed, 508 insertions(+), 28 deletions(-) -- 2.43.0 ^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v5 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield 2026-04-22 9:16 ` [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus @ 2026-04-22 9:16 ` Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-22 9:16 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus Previously, the only way to transmit LLDP packets via the iavf PMD was to register the IAVF_TX_LLDP_DYNFIELD dynamic mbuf field and set it to a non-zero value on each LLDP mbuf before Tx. This patch adds an alternative. If the new devarg `enable_ptype_lldp` is set to 1, and if the mbuf packet type is set to RTE_PTYPE_L2_ETHER_LLDP, then a Tx path with context descriptor support will be selected and any packets with the LLDP ptype will be transmitted. The dynamic mbuf field support is still present; however, it is intended that it will be removed in a future release, at which point only the packet type approach will be supported.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/nics/intel_vf.rst | 33 +++++++++++++++---- doc/guides/rel_notes/release_26_07.rst | 5 +++ drivers/net/intel/common/tx.h | 1 + drivers/net/intel/iavf/iavf.h | 1 + drivers/net/intel/iavf/iavf_ethdev.c | 27 +++++++++++++++ drivers/net/intel/iavf/iavf_rxtx.c | 15 +++++---- drivers/net/intel/iavf/iavf_rxtx.h | 17 +++++++--- drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c | 21 ++++++------ 8 files changed, 92 insertions(+), 28 deletions(-) diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index 5fa2ddc9ea..3845b69408 100644 --- a/doc/guides/nics/intel_vf.rst +++ b/doc/guides/nics/intel_vf.rst @@ -675,19 +675,40 @@ Inline IPsec Support Diagnostic Utilities -------------------- -Register mbuf dynfield to test Tx LLDP -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Tx LLDP Testing +~~~~~~~~~~~~~~~ -Register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` on ``dev_start`` -to indicate the need to send LLDP packet. -This dynfield needs to be set to 1 when preparing packet. +There are two methods to trigger LLDP packet transmission from the VF. -For ``dpdk-testpmd`` application, it needs to stop and restart Tx port to take effect. +The first (and recommended) method is to set the ``packet_type`` of the mbuf to +``RTE_PTYPE_L2_ETHER_LLDP``. This, in conjunction with enabling the +``enable_ptype_lldp`` devarg will cause such packets to be transmitted:: + + -a 0000:xx:xx.x,enable_ptype_lldp=1 + +An alternative method is to register an mbuf dynfield ``IAVF_TX_LLDP_DYNFIELD`` +before ``dev_start``. This dynfield needs to be set to 1 when preparing an LLDP +packet intended for transmission. + +.. note:: + + The dynamic mbuf field method is deprecated and will be removed in a future + release. Users should migrate to the ``enable_ptype_lldp`` devarg and mbuf + LLDP ptype method described above. 
+ +For ``dpdk-testpmd`` application, the dynamic mbuf field is registered when the +following command is issued: Usage:: testpmd> set tx lldp on +One must then stop and restart the port for it to take effect. +These requirements only apply for the dynamic mbuf field method; no special +steps are needed for the ``enable_ptype_lldp`` devarg method. +If both methods are enabled, the ptype based method will take precedence over the +dynamic mbuf field method. + Limitations or Knowing issues ----------------------------- diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index 060b26ff61..e0b27a554a 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -56,6 +56,11 @@ New Features ======================================================= +* **Updated Intel iavf driver.** + + * Added support for transmitting LLDP packets based on mbuf packet type. + + Removed Items ------------- diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h index c179247e24..8a625ea8aa 100644 --- a/drivers/net/intel/common/tx.h +++ b/drivers/net/intel/common/tx.h @@ -196,6 +196,7 @@ struct ci_tx_queue { uint8_t vlan_flag; uint8_t tc; bool use_ctx; /* with ctx info, each pkt needs two descriptors */ + uint8_t lldp_mode; /* ptype or dynfield */ }; struct { /* ixgbe specific values */ const struct ixgbe_txq_ops *ops; diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index f8008d0fda..ef503a1b64 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -323,6 +323,7 @@ struct iavf_devargs { int auto_reconfig; int no_poll_on_link_down; uint64_t mbuf_check; + int enable_ptype_lldp; }; struct iavf_security_ctx; diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c index 3126d9b644..288d701b7e 100644 --- a/drivers/net/intel/iavf/iavf_ethdev.c +++ b/drivers/net/intel/iavf/iavf_ethdev.c @@ -44,6 +44,7 @@ #define 
IAVF_ENABLE_AUTO_RECONFIG_ARG "auto_reconfig" #define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down" #define IAVF_MBUF_CHECK_ARG "mbuf_check" +#define IAVF_ENABLE_PTYPE_LLDP_ARG "enable_ptype_lldp" uint64_t iavf_timestamp_dynflag; int iavf_timestamp_dynfield_offset = -1; int rte_pmd_iavf_tx_lldp_dynfield_offset = -1; @@ -56,6 +57,7 @@ static const char * const iavf_valid_args[] = { IAVF_ENABLE_AUTO_RECONFIG_ARG, IAVF_NO_POLL_ON_LINK_DOWN_ARG, IAVF_MBUF_CHECK_ARG, + IAVF_ENABLE_PTYPE_LLDP_ARG, NULL }; @@ -1016,6 +1018,26 @@ iavf_dev_start(struct rte_eth_dev *dev) /* Check Tx LLDP dynfield */ rte_pmd_iavf_tx_lldp_dynfield_offset = rte_mbuf_dynfield_lookup(IAVF_TX_LLDP_DYNFIELD, NULL); + if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) { + PMD_DRV_LOG(WARNING, + "Using a dynamic mbuf field to identify LLDP packets is deprecated. " + "Set the 'enable_ptype_lldp' driver option and mbuf LLDP ptypes instead."); + if (adapter->devargs.enable_ptype_lldp) + PMD_DRV_LOG(WARNING, + "Both ptype and dynfield LLDP enabled; ptype takes precedence."); + } + + for (uint16_t i = 0; i < dev->data->nb_tx_queues; i++) { + struct ci_tx_queue *txq = dev->data->tx_queues[i]; + if (txq) { + if (adapter->devargs.enable_ptype_lldp) + txq->lldp_mode = IAVF_LLDP_PTYPE; + else if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + txq->lldp_mode = IAVF_LLDP_DYNFIELD; + else + txq->lldp_mode = IAVF_LLDP_DISABLED; + } + } if (iavf_init_queues(dev) != 0) { PMD_DRV_LOG(ERR, "failed to do Queue init"); @@ -2445,6 +2467,11 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev) if (ret) goto bail; + ret = rte_kvargs_process(kvlist, IAVF_ENABLE_PTYPE_LLDP_ARG, + &parse_bool, &ad->devargs.enable_ptype_lldp); + if (ret) + goto bail; + bail: rte_kvargs_free(kvlist); return ret; diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 3d9b49efd0..4828655ea7 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -2343,7 +2343,7 @@ 
iavf_recv_pkts_bulk_alloc(void *rx_queue, /* Check if the context descriptor is needed for TX offloading */ static inline uint16_t -iavf_calc_context_desc(const struct rte_mbuf *mb, uint8_t vlan_flag) +iavf_calc_context_desc(const struct rte_mbuf *mb, uint8_t vlan_flag, uint8_t lldp_mode) { uint64_t flags = mb->ol_flags; if (flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG | @@ -2354,7 +2354,7 @@ iavf_calc_context_desc(const struct rte_mbuf *mb, uint8_t vlan_flag) vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) return 1; - if (IAVF_CHECK_TX_LLDP(mb)) + if (IAVF_CHECK_TX_LLDP(mb, lldp_mode)) return 1; return 0; @@ -2542,17 +2542,18 @@ iavf_get_context_desc(uint64_t ol_flags, const struct rte_mbuf *mbuf, const struct ci_tx_queue *txq, uint64_t *qw0, uint64_t *qw1) { - uint8_t iavf_vlan_flag; + uint8_t iavf_vlan_flag, lldp_mode; uint16_t cd_l2tag2 = 0; uint64_t cd_type_cmd = IAVF_TX_DESC_DTYPE_CONTEXT; uint64_t cd_tunneling_params = 0; struct iavf_ipsec_crypto_pkt_metadata *ipsec_md = NULL; - /* Use IAVF-specific vlan_flag from txq */ + /* Use IAVF-specific flags from txq */ iavf_vlan_flag = txq->vlan_flag; + lldp_mode = txq->lldp_mode; /* Check if context descriptor is needed using existing IAVF logic */ - if (!iavf_calc_context_desc(mbuf, iavf_vlan_flag)) + if (!iavf_calc_context_desc(mbuf, iavf_vlan_flag, lldp_mode)) return 0; /* Get IPsec metadata if needed */ @@ -2584,7 +2585,7 @@ iavf_get_context_desc(uint64_t ol_flags, const struct rte_mbuf *mbuf, } /* LLDP switching field */ - if (IAVF_CHECK_TX_LLDP(mbuf)) + if (IAVF_CHECK_TX_LLDP(mbuf, lldp_mode)) cd_type_cmd |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; /* Tunneling field */ @@ -3886,7 +3887,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) if (iavf_tx_vec_dev_check(dev) != -1) req_features.simd_width = iavf_get_max_simd_bitwidth(); - if (rte_pmd_iavf_tx_lldp_dynfield_offset > 0) + if (adapter->devargs.enable_ptype_lldp || rte_pmd_iavf_tx_lldp_dynfield_offset > 0) 
req_features.ctx_desc = true; for (i = 0; i < dev->data->nb_tx_queues; i++) { diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 80b06518b0..054cffd60c 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -156,11 +156,18 @@ (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK) #define IAVF_TX_LLDP_DYNFIELD "intel_pmd_dynfield_tx_lldp" -#define IAVF_CHECK_TX_LLDP(m) \ - ((rte_pmd_iavf_tx_lldp_dynfield_offset > 0) && \ - (*RTE_MBUF_DYNFIELD((m), \ - rte_pmd_iavf_tx_lldp_dynfield_offset, \ - uint8_t *))) + +/* LLDP Tx modes */ +#define IAVF_LLDP_DISABLED 0 +#define IAVF_LLDP_PTYPE 1 +#define IAVF_LLDP_DYNFIELD 2 + +#define IAVF_CHECK_TX_LLDP(m, lldp_mode) \ + ((lldp_mode) && \ + ((((lldp_mode) == IAVF_LLDP_PTYPE) && \ + ((m)->packet_type & RTE_PTYPE_L2_MASK) == RTE_PTYPE_L2_ETHER_LLDP) || \ + (((lldp_mode) == IAVF_LLDP_DYNFIELD) && \ + *RTE_MBUF_DYNFIELD((m), rte_pmd_iavf_tx_lldp_dynfield_offset, uint8_t *)))) extern uint64_t iavf_timestamp_dynflag; extern int iavf_timestamp_dynfield_offset; diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c index 4e8bf94fa0..c9422971b7 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx512.c @@ -2059,7 +2059,7 @@ iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0, static __rte_always_inline void ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, - uint64_t flags, bool offload, uint8_t vlan_flag) + uint64_t flags, bool offload, uint8_t vlan_flag, uint8_t lldp_mode) { uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; uint64_t low_ctx_qw = 0; @@ -2080,7 +2080,7 @@ ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, } #endif } - if (IAVF_CHECK_TX_LLDP(pkt)) + if (IAVF_CHECK_TX_LLDP(pkt, lldp_mode)) high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; uint64_t high_data_qw = (CI_TX_DESC_DTYPE_DATA 
| @@ -2098,13 +2098,13 @@ ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, static __rte_always_inline void ctx_vtx(volatile struct ci_tx_desc *txdp, struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, - bool offload, uint8_t vlan_flag) + bool offload, uint8_t vlan_flag, uint8_t lldp_mode) { uint64_t hi_data_qw_tmpl = (CI_TX_DESC_DTYPE_DATA | (flags << CI_TXD_QW1_CMD_S)); /* if unaligned on 32-bit boundary, do one to align */ if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { - ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag, lldp_mode); nb_pkts--; txdp++; pkt++; } @@ -2137,7 +2137,7 @@ ctx_vtx(volatile struct ci_tx_desc *txdp, } } #endif - if (IAVF_CHECK_TX_LLDP(pkt[1])) + if (IAVF_CHECK_TX_LLDP(pkt[1], lldp_mode)) hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << CI_TXD_QW1_CMD_S; @@ -2157,7 +2157,7 @@ ctx_vtx(volatile struct ci_tx_desc *txdp, } } #endif - if (IAVF_CHECK_TX_LLDP(pkt[0])) + if (IAVF_CHECK_TX_LLDP(pkt[0], lldp_mode)) hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << CI_TXD_QW1_CMD_S; if (offload) { @@ -2177,7 +2177,7 @@ ctx_vtx(volatile struct ci_tx_desc *txdp, } if (nb_pkts) - ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag); + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag, lldp_mode); } static __rte_always_inline uint16_t @@ -2258,6 +2258,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = CI_TX_DESC_CMD_EOP | CI_TX_DESC_CMD_ICRC; uint64_t rs = CI_TX_DESC_CMD_RS | flags; + uint8_t lldp_mode = txq->lldp_mode; if (txq->nb_tx_free < txq->tx_free_thresh) ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); @@ -2280,10 +2281,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, nb_mbuf = n >> 1; tx_backlog_entry_avx512(txep, tx_pkts, nb_mbuf); - ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag); + ctx_vtx(txdp, tx_pkts, nb_mbuf - 
1, flags, offload, txq->vlan_flag, lldp_mode); tx_pkts += (nb_mbuf - 1); txdp += (n - 2); - ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag, lldp_mode); nb_commit = (uint16_t)(nb_commit - n); @@ -2297,7 +2298,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, nb_mbuf = nb_commit >> 1; tx_backlog_entry_avx512(txep, tx_pkts, nb_mbuf); - ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag, lldp_mode); tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
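The patch collapses the previous dynfield-only macro into a three-way mode switch cached per queue in txq->lldp_mode. A rough, self-contained illustration of that dispatch logic follows; the struct and mode constants below are simplified stand-ins for rte_mbuf and the driver's IAVF_LLDP_* definitions (only the ptype values are taken from rte_mbuf_ptype.h), so treat it as a sketch rather than the driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the driver's LLDP Tx modes (IAVF_LLDP_* in the patch). */
enum { LLDP_DISABLED = 0, LLDP_PTYPE = 1, LLDP_DYNFIELD = 2 };

/* Ptype values as defined in rte_mbuf_ptype.h. */
#define PTYPE_L2_MASK       0x0000000fu
#define PTYPE_L2_ETHER      0x00000001u
#define PTYPE_L2_ETHER_LLDP 0x00000004u

/* Simplified mbuf holding only the fields the check consults; the real
 * dynfield lives at a registered offset inside rte_mbuf. */
struct mini_mbuf {
	uint32_t packet_type;
	uint8_t  lldp_dynfield;
};

/* Function form of the IAVF_CHECK_TX_LLDP() macro's mode dispatch. */
static bool
check_tx_lldp(const struct mini_mbuf *m, uint8_t lldp_mode)
{
	if (lldp_mode == LLDP_PTYPE)
		return (m->packet_type & PTYPE_L2_MASK) == PTYPE_L2_ETHER_LLDP;
	if (lldp_mode == LLDP_DYNFIELD)
		return m->lldp_dynfield != 0;
	return false; /* LLDP_DISABLED: never request the context descriptor */
}

static const struct mini_mbuf lldp_pkt  = { PTYPE_L2_ETHER_LLDP, 0 };
static const struct mini_mbuf plain_pkt = { PTYPE_L2_ETHER, 1 };
```

Resolving the mode once at port start and caching it in the queue means the per-packet hot path tests a single byte instead of re-checking the devarg and the dynfield registration on every mbuf.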
* [PATCH v5 2/3] net/iavf: add AVX2 context descriptor Tx paths 2026-04-22 9:16 ` [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus @ 2026-04-22 9:16 ` Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-22 9:16 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus, Bruce Richardson Prior to this commit, the AVX2 transmit paths used only the data transmit descriptor, making offloads and features that require a context descriptor, such as LLDP, unavailable on the AVX2 path. Add two new AVX2 transmit paths, both of which support using a context descriptor: one performs offloads and the other does not. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> Acked-by: Bruce Richardson <bruce.richardson@intel.com> --- doc/guides/rel_notes/release_26_07.rst | 1 + drivers/net/intel/iavf/iavf.h | 2 + drivers/net/intel/iavf/iavf_rxtx.c | 18 + drivers/net/intel/iavf/iavf_rxtx.h | 4 + drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c | 387 ++++++++++++++++++++ 5 files changed, 412 insertions(+) diff --git a/doc/guides/rel_notes/release_26_07.rst b/doc/guides/rel_notes/release_26_07.rst index e0b27a554a..6ecb0d4f38 100644 --- a/doc/guides/rel_notes/release_26_07.rst +++ b/doc/guides/rel_notes/release_26_07.rst @@ -59,6 +59,7 @@ New Features * **Updated Intel iavf driver.** * Added support for transmitting LLDP packets based on mbuf packet type. + * Implemented AVX2 context descriptor transmit paths.
Removed Items diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h index ef503a1b64..7c39d516a7 100644 --- a/drivers/net/intel/iavf/iavf.h +++ b/drivers/net/intel/iavf/iavf.h @@ -360,6 +360,8 @@ enum iavf_tx_func_type { IAVF_TX_DEFAULT, IAVF_TX_AVX2, IAVF_TX_AVX2_OFFLOAD, + IAVF_TX_AVX2_CTX, + IAVF_TX_AVX2_CTX_OFFLOAD, IAVF_TX_AVX512, IAVF_TX_AVX512_OFFLOAD, IAVF_TX_AVX512_CTX, diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c index 4828655ea7..b90b14a8e4 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.c +++ b/drivers/net/intel/iavf/iavf_rxtx.c @@ -3617,6 +3617,24 @@ static const struct ci_tx_path_info iavf_tx_path_infos[] = { .simd_width = RTE_VECT_SIMD_256 } }, + [IAVF_TX_AVX2_CTX] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx, + .info = "Vector AVX2 Ctx", + .features = { + .tx_offloads = IAVF_TX_VECTOR_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, + [IAVF_TX_AVX2_CTX_OFFLOAD] = { + .pkt_burst = iavf_xmit_pkts_vec_avx2_ctx_offload, + .info = "Vector AVX2 Ctx Offload", + .features = { + .tx_offloads = IAVF_TX_VECTOR_CTX_OFFLOAD_OFFLOADS, + .simd_width = RTE_VECT_SIMD_256, + .ctx_desc = true + } + }, #ifdef CC_AVX512_SUPPORT [IAVF_TX_AVX512] = { .pkt_burst = iavf_xmit_pkts_vec_avx512, diff --git a/drivers/net/intel/iavf/iavf_rxtx.h b/drivers/net/intel/iavf/iavf_rxtx.h index 054cffd60c..7838c17e89 100644 --- a/drivers/net/intel/iavf/iavf_rxtx.h +++ b/drivers/net/intel/iavf/iavf_rxtx.h @@ -595,6 +595,10 @@ uint16_t iavf_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx2_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc); 
int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); diff --git a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c index db0462f0f5..aa60e71857 100644 --- a/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/intel/iavf/iavf_rxtx_vec_avx2.c @@ -1763,6 +1763,393 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } +static inline void +iavf_fill_ctx_desc_tunneling_avx2(uint64_t *low_ctx_qw, struct rte_mbuf *pkt) +{ + if (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (pkt->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = pkt->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = pkt->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. 
+ * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (pkt->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (pkt->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *low_ctx_qw = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; + + } else { + *low_ctx_qw = 0; + } +} + +static inline void +iavf_fill_ctx_desc_tunneling_field(volatile uint64_t *qw0, + const struct rte_mbuf *m) +{ + uint64_t eip_typ = IAVF_TX_CTX_DESC_EIPT_NONE; + uint64_t eip_len = 0; + uint64_t eip_noinc = 0; + /* Default - IP_ID is increment in each segment of LSO */ + + switch (m->ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | + RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM)) { + case RTE_MBUF_F_TX_OUTER_IPV4: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_NO_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV4_CHECKSUM_OFFLOAD; + eip_len = m->outer_l3_len >> 2; + break; + case RTE_MBUF_F_TX_OUTER_IPV6: + eip_typ = IAVF_TX_CTX_DESC_EIPT_IPV6; + eip_len = m->outer_l3_len >> 2; + break; + } + + /* L4TUNT: L4 Tunneling Type */ + switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) { + case RTE_MBUF_F_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 00b */ + break; + case RTE_MBUF_F_TX_TUNNEL_VXLAN: + case 
RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: + case RTE_MBUF_F_TX_TUNNEL_GTP: + case RTE_MBUF_F_TX_TUNNEL_GENEVE: + eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING; + break; + case RTE_MBUF_F_TX_TUNNEL_GRE: + eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING; + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + + /* L4TUNLEN: L4 Tunneling Length, in Words + * + * We depend on app to set rte_mbuf.l2_len correctly. + * For IP in GRE it should be set to the length of the GRE + * header; + * For MAC in GRE or MAC in UDP it should be set to the length + * of the GRE or UDP headers plus the inner MAC up to including + * its last Ethertype. + * If MPLS labels exists, it should include them as well. + */ + eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT; + + /** + * Calculate the tunneling UDP checksum. + * Shall be set only if L4TUNT = 01b and EIPT is not zero + */ + if ((eip_typ & (IAVF_TX_CTX_EXT_IP_IPV6 | + IAVF_TX_CTX_EXT_IP_IPV4 | + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM)) && + (eip_typ & IAVF_TXD_CTX_UDP_TUNNELING) && + (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) + eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK; + + *qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT | + eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT | + eip_noinc << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIP_NOINC_SHIFT; +} + +static __rte_always_inline void +ctx_vtx1(volatile struct ci_tx_desc *txdp, struct rte_mbuf *pkt, + uint64_t flags, bool offload, uint8_t vlan_flag, uint8_t lldp_mode) +{ + uint64_t high_ctx_qw = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw = 0; + + if (offload) { + iavf_fill_ctx_desc_tunneling_avx2(&low_ctx_qw, pkt); +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (pkt->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? 
+ (uint64_t)pkt->vlan_tci_outer : + (uint64_t)pkt->vlan_tci; + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + high_ctx_qw |= IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw |= (uint64_t)pkt->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } +#endif + } + if (IAVF_CHECK_TX_LLDP(pkt, lldp_mode)) + high_ctx_qw |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + uint64_t high_data_qw = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) | + ((uint64_t)pkt->data_len << IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)); + if (offload) + iavf_txd_enable_offload(pkt, &high_data_qw, vlan_flag); + + __m256i ctx_data_desc = _mm256_set_epi64x(high_data_qw, pkt->buf_iova + pkt->data_off, + high_ctx_qw, low_ctx_qw); + + _mm256_storeu_si256(RTE_CAST_PTR(__m256i *, txdp), ctx_data_desc); +} + +static __rte_always_inline void +ctx_vtx(volatile struct ci_tx_desc *txdp, + struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags, + bool offload, uint8_t vlan_flag, uint8_t lldp_mode) +{ + uint64_t hi_data_qw_tmpl = (IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT)); + + /* if unaligned on 32-bit boundary, do one to align */ + if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) { + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag, lldp_mode); + nb_pkts--; txdp++; pkt++; + } + + for (; nb_pkts > 1; txdp += 4, pkt += 2, nb_pkts -= 2) { + uint64_t hi_ctx_qw1 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t hi_ctx_qw0 = IAVF_TX_DESC_DTYPE_CONTEXT; + uint64_t low_ctx_qw1 = 0; + uint64_t low_ctx_qw0 = 0; + uint64_t hi_data_qw1 = 0; + uint64_t hi_data_qw0 = 0; + + hi_data_qw1 = hi_data_qw_tmpl | + ((uint64_t)pkt[1]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + hi_data_qw0 = hi_data_qw_tmpl | + ((uint64_t)pkt[0]->data_len << + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT); + 
+#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[1]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[1]->vlan_tci_outer : + (uint64_t)pkt[1]->vlan_tci; + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[1]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw1 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw1 |= + (uint64_t)pkt[1]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[1], lldp_mode)) + hi_ctx_qw1 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + +#ifdef IAVF_TX_VLAN_QINQ_OFFLOAD + if (offload) { + if (pkt[0]->ol_flags & RTE_MBUF_F_TX_QINQ) { + uint64_t qinq_tag = vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 ? + (uint64_t)pkt[0]->vlan_tci_outer : + (uint64_t)pkt[0]->vlan_tci; + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_IL2TAG2 << + IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= qinq_tag << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } else if (pkt[0]->ol_flags & RTE_MBUF_F_TX_VLAN && + vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) { + hi_ctx_qw0 |= + IAVF_TX_CTX_DESC_IL2TAG2 << IAVF_TXD_CTX_QW1_CMD_SHIFT; + low_ctx_qw0 |= + (uint64_t)pkt[0]->vlan_tci << IAVF_TXD_CTX_QW0_L2TAG2_PARAM; + } + } +#endif + if (IAVF_CHECK_TX_LLDP(pkt[0], lldp_mode)) + hi_ctx_qw0 |= IAVF_TX_CTX_DESC_SWTCH_UPLINK << IAVF_TXD_CTX_QW1_CMD_SHIFT; + + if (offload) { + iavf_txd_enable_offload(pkt[1], &hi_data_qw1, vlan_flag); + iavf_txd_enable_offload(pkt[0], &hi_data_qw0, vlan_flag); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw1, pkt[1]); + iavf_fill_ctx_desc_tunneling_field(&low_ctx_qw0, pkt[0]); + } + + __m256i desc2_3 = + _mm256_set_epi64x + (hi_data_qw1, pkt[1]->buf_iova + pkt[1]->data_off, + hi_ctx_qw1, low_ctx_qw1); + __m256i desc0_1 = + _mm256_set_epi64x + (hi_data_qw0, pkt[0]->buf_iova + 
pkt[0]->data_off, + hi_ctx_qw0, low_ctx_qw0); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp + 2), desc2_3); + _mm256_store_si256(RTE_CAST_PTR(__m256i *, txdp), desc0_1); + } + + if (nb_pkts) + ctx_vtx1(txdp, *pkt, flags, offload, vlan_flag, lldp_mode); +} + +static __rte_always_inline uint16_t +iavf_xmit_fixed_burst_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + volatile struct ci_tx_desc *txdp; + struct ci_tx_entry_vec *txep; + uint16_t n, nb_commit, nb_mbuf, tx_id; + /* bit2 is reserved and must be set to 1 according to Spec */ + uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; + uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; + uint8_t lldp_mode = txq->lldp_mode; + + if (txq->nb_tx_free < txq->tx_free_thresh) + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); + + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); + nb_commit &= 0xFFFE; + if (unlikely(nb_commit == 0)) + return 0; + + nb_pkts = nb_commit >> 1; + tx_id = txq->tx_tail; + txdp = &txq->ci_tx_ring[tx_id]; + txep = (void *)txq->sw_ring; + txep += (tx_id >> 1); + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); + n = (uint16_t)(txq->nb_tx_desc - tx_id); + + if (n != 0 && nb_commit >= n) { + nb_mbuf = n >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf - 1, flags, offload, txq->vlan_flag, lldp_mode); + tx_pkts += (nb_mbuf - 1); + txdp += (n - 2); + ctx_vtx1(txdp, *tx_pkts++, rs, offload, txq->vlan_flag, lldp_mode); + + nb_commit = (uint16_t)(nb_commit - n); + + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + tx_id = 0; + /* avoid reach the end of ring */ + txdp = txq->ci_tx_ring; + txep = (void *)txq->sw_ring; + } + + nb_mbuf = nb_commit >> 1; + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_mbuf); + + ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag, lldp_mode); + tx_id = (uint16_t)(tx_id + nb_commit); + 
+ if (tx_id > txq->tx_next_rs) { + txq->ci_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << + IAVF_TXD_QW1_CMD_SHIFT); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); + } + + txq->tx_tail = tx_id; + + IAVF_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); + return nb_pkts; +} + +static __rte_always_inline uint16_t +iavf_xmit_pkts_vec_avx2_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts, bool offload) +{ + uint16_t nb_tx = 0; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; + + while (nb_pkts) { + uint16_t ret, num; + + /* cross rs_thresh boundary is not allowed */ + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); + num = num >> 1; + ret = iavf_xmit_fixed_burst_vec_avx2_ctx(tx_queue, &tx_pkts[nb_tx], + num, offload); + nb_tx += ret; + nb_pkts -= ret; + if (ret < num) + break; + } + + return nb_tx; +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, false); +} + +uint16_t +iavf_xmit_pkts_vec_avx2_ctx_offload(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + return iavf_xmit_pkts_vec_avx2_ctx_cmn(tx_queue, tx_pkts, nb_pkts, true); +} + static __rte_always_inline uint16_t iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
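On this path each packet writes a context descriptor and a data descriptor as one 32-byte store. The descriptor qwords themselves are plain shift-and-or compositions; a minimal scalar sketch of the high data qword assembled in ctx_vtx1() is shown below. The shift and command values are restated from the iavf Tx descriptor layout as local constants, so treat them as illustrative rather than authoritative:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values following the iavf Tx data descriptor layout:
 * DTYPE in bits 0-3, CMD starting at bit 4, buffer size at bit 34. */
#define TXD_QW1_CMD_SHIFT       4
#define TXD_QW1_TX_BUF_SZ_SHIFT 34

#define TX_DESC_DTYPE_DATA 0x0ULL
#define TX_DESC_CMD_EOP    0x1ULL /* end of packet */
#define TX_DESC_CMD_ICRC   0x4ULL /* insert Ethernet CRC */

/* Scalar equivalent of the high data qword built in ctx_vtx1(). */
static uint64_t
build_data_qw(uint64_t cmd_flags, uint16_t data_len)
{
	return TX_DESC_DTYPE_DATA |
	       (cmd_flags << TXD_QW1_CMD_SHIFT) |
	       ((uint64_t)data_len << TXD_QW1_TX_BUF_SZ_SHIFT);
}
```

In the vector path, two such qwords plus the context qwords for a packet are packed with _mm256_set_epi64x() and written with 256-bit stores, which is why the burst function first emits a single descriptor pair whenever txdp is not aligned on a 32-byte boundary.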
* [PATCH v5 3/3] doc: announce change to LLDP packet detection in iavf PMD 2026-04-22 9:16 ` [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus @ 2026-04-22 9:16 ` Ciara Loftus 2 siblings, 0 replies; 25+ messages in thread From: Ciara Loftus @ 2026-04-22 9:16 UTC (permalink / raw) To: dev; +Cc: Ciara Loftus The iavf PMD transmit paths currently use two methods to identify whether an mbuf holds an LLDP packet: a dynamic mbuf field and the mbuf ptype. The ptype method is enabled via the enable_ptype_lldp devarg. A future release will remove the dynamic mbuf field approach, at which point the ptype will be the only approach used. Signed-off-by: Ciara Loftus <ciara.loftus@intel.com> --- doc/guides/rel_notes/deprecation.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 35c9b4e06c..17f90a6352 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -154,3 +154,7 @@ Deprecation Notices * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK. Those API functions are used internally by DPDK core and netvsc PMD. + +* net/iavf: The dynamic mbuf field used to detect LLDP packets on the + transmit path in the iavf PMD will be removed in a future release. + After removal, only packet type-based detection will be supported. -- 2.43.0 ^ permalink raw reply related [flat|nested] 25+ messages in thread
end of thread, other threads:[~2026-04-22 9:19 UTC | newest] Thread overview: 25+ messages -- 2026-02-09 15:20 [PATCH 0/2] iavf: use ptype for LLDP and add AVX2 ctx paths Ciara Loftus 2026-02-09 15:20 ` [PATCH 1/2] net/iavf: use mbuf packet type instead of dynfield for LLDP Ciara Loftus 2026-02-09 16:10 ` Bruce Richardson 2026-03-06 11:49 ` Loftus, Ciara 2026-02-09 15:20 ` [PATCH 2/2] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 1/3] net/iavf: support LLDP Tx based on mbuf ptype or dynfield Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-03-06 11:52 ` [PATCH v2 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus 2026-04-17 10:57 ` Bruce Richardson 2026-04-22 9:19 ` Loftus, Ciara 2026-04-17 10:08 ` [PATCH v3 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-04-17 10:08 ` [PATCH v3 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-17 10:56 ` [PATCH v4 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara Loftus 2026-04-17 13:00 ` Bruce Richardson 2026-04-17 10:56 ` [PATCH v4 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-04-17 12:59 ` Bruce Richardson 2026-04-17 10:56 ` [PATCH v4 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 0/3] iavf: LLDP ptype and AVX2 ctx paths Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 1/3] net/iavf: support LLDP Tx via mbuf ptype or dynfield Ciara
Loftus 2026-04-22 9:16 ` [PATCH v5 2/3] net/iavf: add AVX2 context descriptor Tx paths Ciara Loftus 2026-04-22 9:16 ` [PATCH v5 3/3] doc: announce change to LLDP packet detection in iavf PMD Ciara Loftus