* [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMDs
@ 2026-02-09 14:13 Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 01/12] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (19 more replies)
0 siblings, 20 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for the ixgbe, i40e, and iavf PMDs.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
This is done in preparation for further rework.
Anatoly Burakov (12):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: use stack allocations for tunnel set
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/iavf: remove remnants of pipeline mode
net/iavf: do not use malloc in crypto VF commands
net/iavf: decouple hash uninit from parser uninit
drivers/net/intel/i40e/i40e_ethdev.c | 148 ++++++++---------
drivers/net/intel/i40e/i40e_ethdev.h | 24 +--
drivers/net/intel/i40e/i40e_flow.c | 146 ++++++++---------
drivers/net/intel/i40e/i40e_hash.c | 27 +++-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/iavf/iavf_ethdev.c | 4 +
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 --
drivers/net/intel/iavf/iavf_hash.c | 14 +-
drivers/net/intel/iavf/iavf_hash.h | 13 ++
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 158 ++++++------------
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 162 ++++++++++++-------
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 ---
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
18 files changed, 352 insertions(+), 422 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
--
2.47.3
^ permalink raw reply [flat|nested] 297+ messages in thread
* [PATCH v1 01/12] net/ixgbe: remove MAC type check macros
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMDs Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 02/12] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (18 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf; they also hid a return statement from the call site. Remove them and
make the MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5393c81363..55b121b15d 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 27d2ba1132..a1245bb906 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -621,7 +621,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -861,7 +863,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1150,7 +1158,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v1 02/12] net/ixgbe: remove security-related ifdefery
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMDs Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 01/12] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 03/12] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (17 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 55b121b15d..877fce697b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -478,9 +476,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index a1245bb906..e7521a4b1f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(&eth_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -249,7 +248,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
item->type == RTE_FLOW_ITEM_TYPE_IPV6);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -630,11 +628,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3074,11 +3070,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP)
return flow;
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v1 03/12] net/ixgbe: split security and ntuple filters
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMDs Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 01/12] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 02/12] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 04/12] net/i40e: get rid of global filter variables Anatoly Burakov
` (16 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are handled together even though they share almost no code.
Separate the security filter from the ntuple filter and parse each one
separately.

While at it, make the checks more stringent (such as checking for a NULL
conf) and more type-safe.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 136 ++++++++++++++++++---------
1 file changed, 91 insertions(+), 45 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index e7521a4b1f..fee5d7b901 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,41 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(&eth_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
-
- filter->proto = IPPROTO_ESP;
- return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
- item->type == RTE_FLOW_ITEM_TYPE_IPV6);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -607,6 +572,81 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int __rte_unused
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ return ixgbe_crypto_add_ingress_sa_from_flow(security->security_session,
+ item->spec, item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -628,10 +668,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3066,14 +3102,17 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP)
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
return flow;
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3297,6 +3336,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v1 04/12] net/i40e: get rid of global filter variables
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMDs Anatoly Burakov
` (2 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 03/12] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 05/12] net/i40e: use stack allocations for tunnel set Anatoly Burakov
` (15 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact
that `rte_flow_validate()` is called directly from `rte_flow_create()`,
with no way to pass state between the two functions. Fix that by adding a
small wrapper around validation that creates a dummy context.

Additionally, the tunnel filter doesn't appear to be used by anything, so
it is omitted from the new structure.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v1 05/12] net/i40e: use stack allocations for tunnel set
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 04/12] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-10 10:56 ` Burakov, Anatoly
2026-02-09 14:13 ` [PATCH v1 06/12] net/i40e: make default RSS key global Anatoly Burakov
` (14 subsequent siblings)
19 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the tunnel "set" function uses rte_zmalloc to allocate a
temporary variable. Heap allocation is not needed in this context: the
variable is short-lived and local to the function, so it can be replaced
with a stack allocation, which also removes the need to free it on every
exit path.
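The pattern the patch applies can be sketched as follows; the struct and
function names below are simplified stand-ins for the driver's types, not
the actual i40e API:

```c
#include <stdint.h>

/* Hypothetical stand-in for struct i40e_aqc_cloud_filters_element_bb;
 * field layout simplified for illustration. */
struct filter_elem {
	uint8_t  outer_mac[6];
	uint16_t inner_vlan;
};

/* Before the patch, a short-lived element like this was rte_zmalloc'd
 * and had to be rte_free'd on every exit path. A zero-initialized
 * stack variable achieves the same result with no cleanup at all. */
static int build_tunnel_filter(uint16_t vlan, struct filter_elem *out)
{
	struct filter_elem cld_filter = {0};	/* stack allocation, zeroed */

	cld_filter.inner_vlan = vlan;
	*out = cld_filter;	/* in the driver, this is handed to the adminq */
	return 0;		/* error paths need no free() either */
}
```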
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..f327a3927f 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v1 06/12] net/i40e: make default RSS key global
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 05/12] net/i40e: use stack allocations for tunnel set Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 07/12] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (13 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but
each of those places defines it as a local variable. Make it a global
constant, and adjust all callers to use it. When dealing with the adminq,
we cannot pass the constant down directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it to hardware.
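The const-to-local-buffer handling can be sketched like this; names such
as `set_rss_key`, `reset_rss_key`, and `RSS_KEY_LEN` are illustrative
stand-ins, not the driver's actual symbols:

```c
#include <stdint.h>
#include <string.h>

#define RSS_KEY_LEN 52	/* (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t) */

/* global constant; remaining bytes are zero-initialized for brevity */
static const uint8_t rss_key_default[RSS_KEY_LEN] = { 0x44, 0x39, 0x79, 0x6b };

/* Stand-in for i40e_set_rss_key(); its adminq path takes a non-const
 * buffer that the command machinery may modify. */
static int set_rss_key(uint8_t *key, uint8_t len)
{
	(void)key;
	return len == RSS_KEY_LEN ? 0 : -1;
}

static int reset_rss_key(const uint8_t *user_key, uint32_t user_len)
{
	uint8_t key_buf[RSS_KEY_LEN];
	const uint8_t *key = (user_key != NULL && user_len >= RSS_KEY_LEN)
		? user_key : rss_key_default;

	/* never hand the const default to the adminq directly */
	memcpy(key_buf, key, sizeof(key_buf));
	return set_rss_key(key_buf, sizeof(key_buf));
}
```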
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index f327a3927f..770c78473e 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9064,23 +9064,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v1 07/12] net/i40e: use unsigned types for queue comparisons
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 06/12] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 08/12] net/i40e: use proper flex len define Anatoly Burakov
` (12 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum number of
queues per traffic class (64), we use signed values, which results in a
compiler warning when comparing `I40E_MAX_Q_PER_TC` against an unsigned
value. Make the define unsigned, and adjust callers to use matching
types. As a consequence, `i40e_align_floor` now returns an unsigned value
as well; this is correct, because nothing about that function implies
signed usage is a valid use case.
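A minimal illustration of the class of warning being fixed, using
simplified names; `MAX_Q_PER_TC` below mirrors the `U`-suffixed define:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_Q_PER_TC 64U	/* unsigned, as in the patched define */

/* With max_queue declared as int, comparing it against a uint16_t
 * queue index forced an (int) cast and risked -Wsign-compare; keeping
 * every operand unsigned makes the comparison well-defined, cast-free. */
static int queues_valid(const uint16_t *queue, size_t num, size_t hw_limit)
{
	size_t max_queue = hw_limit < MAX_Q_PER_TC ? hw_limit : MAX_Q_PER_TC;

	for (size_t i = 0; i < num; i++)
		if (queue[i] >= max_queue)
			return 0;
	return 1;
}
```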
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 770c78473e..06430e6319 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9040,7 +9040,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ size_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..d144297360 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC 64U
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..cbb377295d 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ size_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v1 08/12] net/i40e: use proper flex len define
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 07/12] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 09/12] net/i40e: remove global pattern variable Anatoly Burakov
` (11 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and this only works because the two happen to evaluate to the
same value.
Use the i40e-specific define for both arrays.
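The invariant the patch restores can be expressed as a compile-time
check; `FDIR_MAX_FLEX_LEN` below is an illustrative value standing in for
`I40E_FDIR_MAX_FLEX_LEN`, not the real define:

```c
#include <stdint.h>

#define FDIR_MAX_FLEX_LEN 16	/* illustrative value only */

/* With a single define for both arrays, code that walks flexbytes[i]
 * and flex_mask[i] in lockstep stays in bounds by construction. */
struct fdir_flow_ext {
	uint8_t flexbytes[FDIR_MAX_FLEX_LEN];
	uint8_t flex_mask[FDIR_MAX_FLEX_LEN];
};

_Static_assert(sizeof(((struct fdir_flow_ext *)0)->flexbytes) ==
	       sizeof(((struct fdir_flow_ext *)0)->flex_mask),
	       "spec and mask flex lengths must match");
```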
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index d144297360..025901edb6 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v1 09/12] net/i40e: remove global pattern variable
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 08/12] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 10/12] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (10 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list
by removing void flow items and copies the remaining patterns into an
array. Patterns of up to 32 items reuse a file-scope global array, while
larger patterns get a list dynamically allocated with rte_zmalloc, which
is overkill for a short-lived, CPU-local buffer.
Remove the global variable, and replace the split behavior with an
unconditional allocation using plain calloc.
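The before/after can be sketched roughly as below; `flow_item` and
`copy_pattern` are simplified stand-ins for the rte_flow item type and
the driver's void-item-skipping helper:

```c
#include <stdlib.h>

struct flow_item { int type; };	/* stand-in for struct rte_flow_item */

/* Before: patterns of up to 32 items reused a global array, larger
 * ones were rte_zmalloc'd. After: one unconditional calloc() per
 * parse, freed on every exit path -- no shared global state. */
static struct flow_item *copy_pattern(const struct flow_item *pattern, size_t n)
{
	struct flow_item *items = calloc(n, sizeof(*items));

	if (items == NULL)
		return NULL;	/* caller reports ENOMEM via rte_flow_error_set */
	for (size_t i = 0; i < n; i++)
		items[i] = pattern[i];	/* the real helper skips VOID items here */
	return items;	/* caller must free() */
}
```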
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..c5bb787f28 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -145,9 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3834,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3853,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3863,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v1 10/12] net/iavf: remove remnants of pipeline mode
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 09/12] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 11/12] net/iavf: do not use malloc in crypto VF commands Anatoly Burakov
` (9 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of its supporting infrastructure (the
classification stage enum and the parser's stage field) was left behind in
the code. Remove it, as it is now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v1 11/12] net/iavf: do not use malloc in crypto VF commands
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 10/12] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-09 14:13 ` [PATCH v1 12/12] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (8 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses dynamic
memory allocation (rte_malloc, at that!) for VF message structures that are
only ~40 bytes in size, and frees them as soon as the request completes.
This is wasteful and unnecessary, so use stack allocation instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
1 file changed, 51 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..cb437d3212 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,24 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp;
+ struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +529,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +540,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +707,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +752,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +766,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +773,17 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +795,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +807,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +857,17 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +880,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v1 12/12] net/iavf: decouple hash uninit from parser uninit
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 11/12] net/iavf: do not use malloc in crypto VF commands Anatoly Burakov
@ 2026-02-09 14:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 subsequent siblings)
19 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-09 14:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization also triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization;
rather, it should be a separate step in the dev close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
3 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..70eb7e7ec5 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -35,6 +35,7 @@
#include "iavf_generic_flow.h"
#include "rte_pmd_iavf.h"
#include "iavf_ipsec_crypto.h"
+#include "iavf_hash.h"
/* devargs */
#define IAVF_PROTO_XTR_ARG "proto_xtr"
@@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..d864998402 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -22,6 +22,7 @@
#include "iavf_log.h"
#include "iavf.h"
#include "iavf_generic_flow.h"
+#include "iavf_hash.h"
#define IAVF_PHINT_NONE 0
#define IAVF_PHINT_GTPU BIT_ULL(0)
@@ -77,7 +78,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
new file mode 100644
index 0000000000..2348f32673
--- /dev/null
+++ b/drivers/net/intel/iavf/iavf_hash.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _IAVF_HASH_H_
+#define _IAVF_HASH_H_
+
+#include "iavf.h"
+
+void
+iavf_hash_uninit(struct iavf_adapter *ad);
+
+#endif /* _IAVF_HASH_H_ */
--
2.47.3
* Re: [PATCH v1 05/12] net/i40e: use stack allocations for tunnel set
2026-02-09 14:13 ` [PATCH v1 05/12] net/i40e: use stack allocations for tunnel set Anatoly Burakov
@ 2026-02-10 10:56 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-10 10:56 UTC (permalink / raw)
To: dev, Bruce Richardson
On 2/9/2026 3:13 PM, Anatoly Burakov wrote:
> Currently, the tunnel "set" function is using rte_zmalloc to allocate a
> temporary variable. It is actually not needed in this context and can be
> avoided entirely and replaced with stack allocation.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
I've actually found quite a few more of these, so I'll send them out as a
separate patchset.
--
Thanks,
Anatoly
* [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-09 14:13 ` [PATCH v1 12/12] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (25 more replies)
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 subsequent siblings)
19 siblings, 26 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that the IXGBE part of this patchset depends on the MAC type check fixes [1].
[1] https://patches.dpdk.org/project/dpdk/patch/23738c8f043b1f65742744caa9b725d5474cb059.1770738711.git.anatoly.burakov@intel.com/
v1 -> v2:
- Added more cleanups around rte_malloc usage
Anatoly Burakov (26):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: do not use malloc in crypto VF commands
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
drivers/net/intel/i40e/i40e_ethdev.c | 223 +++++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 24 +-
drivers/net/intel/i40e/i40e_flow.c | 146 ++++++------
drivers/net/intel/i40e/i40e_hash.c | 27 ++-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +--
drivers/net/intel/i40e/rte_pmd_i40e.c | 59 ++---
drivers/net/intel/iavf/iavf_ethdev.c | 8 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 --
drivers/net/intel/iavf/iavf_hash.c | 14 +-
drivers/net/intel/iavf/iavf_hash.h | 13 ++
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 216 ++++++------------
drivers/net/intel/iavf/iavf_vchnl.c | 85 +++----
drivers/net/intel/ice/ice_dcf_ethdev.c | 8 +-
drivers/net/intel/ice/ice_ethdev.c | 8 +-
drivers/net/intel/ice/ice_fdir_filter.c | 14 +-
drivers/net/intel/ice/ice_hash.c | 10 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 162 +++++++++-----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 ---
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
25 files changed, 496 insertions(+), 625 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
--
2.47.3
* [PATCH v2 01/26] net/ixgbe: remove MAC type check macros
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (24 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros were uninformative, hid an early function return inside a macro,
and added no value beyond code golf, so remove them and make the MAC type
checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5393c81363..55b121b15d 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 27d2ba1132..a1245bb906 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -621,7 +621,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -861,7 +863,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1150,7 +1158,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v2 02/26] net/ixgbe: remove security-related ifdefery
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (23 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency of ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 55b121b15d..877fce697b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -478,9 +476,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index a1245bb906..e7521a4b1f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -249,7 +248,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
item->type == RTE_FLOW_ITEM_TYPE_IPV6);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -630,11 +628,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3074,11 +3070,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP)
return flow;
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
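The patch above drops `#ifdef RTE_LIB_SECURITY` guards and relies on runtime state (`txq->using_ipsec`, `dev->security_ctx`) instead. A minimal sketch of that pattern, with hypothetical names rather than the driver's actual types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the queue state the driver consults at runtime. */
struct demo_txq {
	bool using_ipsec; /* set once at queue setup, checked per burst */
};

/*
 * Before: compile-time gating.
 *   #ifdef RTE_LIB_SECURITY
 *       if (use_ipsec) { ... }
 *   #endif
 * After: the branch is always compiled; when security offload is not
 * configured the flag simply stays false, so the branch is never taken.
 */
static uint32_t
demo_olinfo_flags(const struct demo_txq *txq, bool pkt_wants_sec)
{
	uint32_t flags = 0;

	if (txq->using_ipsec && pkt_wants_sec)
		flags |= 0x1u; /* stand-in for IXGBE_ADVTXD_POPTS_IPSEC */
	return flags;
}
```

The cost is one always-false branch on builds without the security library, in exchange for the whole Tx/Rx path being compiled (and compile-tested) unconditionally.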
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 03/26] net/ixgbe: split security and ntuple filters
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
` (22 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code
between each other. Separate the security filter from the ntuple filter
and parse it separately.
While we're at it, make the checks more stringent (such as checking for
NULL conf) and more type-safe.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 136 ++++++++++++++++++---------
1 file changed, 91 insertions(+), 45 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index e7521a4b1f..fee5d7b901 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,41 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
-
- filter->proto = IPPROTO_ESP;
- return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
- item->type == RTE_FLOW_ITEM_TYPE_IPV6);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -607,6 +572,81 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int __rte_unused
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ return ixgbe_crypto_add_ingress_sa_from_flow(security->security_session,
+ item->spec, item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -628,10 +668,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3066,14 +3102,17 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP)
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
return flow;
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3297,6 +3336,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
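With the split in place, `ixgbe_flow_create()` tries the security parser first and falls back to the ntuple parser when it does not match. A minimal sketch of that dispatch order, using hypothetical parser stubs rather than the driver's code:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical parsers: return 0 on match, negative errno otherwise. */
static int
demo_parse_security(int proto)
{
	return proto == 50 /* ESP */ ? 0 : -ENOTSUP;
}

static int
demo_parse_ntuple(int proto)
{
	return proto == 6 /* TCP */ ? 0 : -EINVAL;
}

/* Mirrors the new ordering: security first, then ntuple, each parser
 * self-contained instead of ESP being a special case inside ntuple. */
static int
demo_classify(int proto)
{
	if (demo_parse_security(proto) == 0)
		return 1; /* handled as a security (ESP) rule */
	if (demo_parse_ntuple(proto) == 0)
		return 2; /* handled as an ntuple rule */
	return -1;        /* unsupported */
}
```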
* [PATCH v2 04/26] net/i40e: get rid of global filter variables
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 05/26] net/i40e: make default RSS key global Anatoly Burakov
` (21 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact
that `rte_flow_validate()` is called directly from `rte_flow_create()`,
with no way to pass state between the two functions. Fix that by adding a
small wrapper around validation that creates a dummy context.
Additionally, the tunnel filter doesn't appear to be used by anything, so
it is omitted from the new structure.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
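The core idea of the patch, replacing the file-scope `cons_filter`/`cons_filter_type` globals with a tagged-union context passed on the stack, can be sketched in miniature (hypothetical names, not the driver's real filter types):

```c
#include <assert.h>
#include <string.h>

enum demo_filter_type {
	DEMO_FILTER_NONE,
	DEMO_FILTER_ETHERTYPE,
	DEMO_FILTER_FDIR,
};

/* Miniature of the i40e_filter_ctx idea: parse results and their type
 * tag travel together, on the caller's stack. */
struct demo_filter_ctx {
	union {
		struct { unsigned int ether_type; } ethertype;
		struct { unsigned int flex_off; } fdir;
	};
	enum demo_filter_type type;
};

/* A parser fills the context it was handed; no global state involved. */
static int
demo_parse_ethertype(struct demo_filter_ctx *ctx, unsigned int ether_type)
{
	ctx->ethertype.ether_type = ether_type;
	ctx->type = DEMO_FILTER_ETHERTYPE;
	return 0;
}

/* Validation that discards its result just uses a throwaway stack
 * context, mirroring the i40e_flow_validate() wrapper around
 * i40e_flow_check(). */
static int
demo_validate(unsigned int ether_type)
{
	struct demo_filter_ctx ctx;

	memset(&ctx, 0, sizeof(ctx));
	return demo_parse_ethertype(&ctx, ether_type);
}
```

Besides removing shared mutable state, this makes the validate/create pair safe to call concurrently on different ports, since each call owns its own context.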
* [PATCH v2 05/26] net/i40e: make default RSS key global
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (20 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but
each of those places defines it as a local variable. Make it a global
constant, and adjust all callers to use it. When dealing with the adminq,
we cannot send down the constant directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it down to hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..2deb87b01b 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9082,23 +9082,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 06/26] net/i40e: use unsigned types for queue comparisons
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 05/26] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 07/26] net/i40e: use proper flex len define Anatoly Burakov
` (19 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum per-TC queue
count of 64, we use signed values, which results in a compiler warning
when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned value. Make
it unsigned, and adjust callers to use matching types. As a consequence,
`i40e_align_floor` now returns an unsigned value as well - this is
correct, because nothing about that function implies that signed usage is
a valid use case.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2deb87b01b..d5c61cd577 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9058,7 +9058,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ size_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..d144297360 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC 64U
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..cbb377295d 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ size_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 07/26] net/i40e: use proper flex len define
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 08/26] net/i40e: remove global pattern variable Anatoly Burakov
` (18 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and this only works because the two defines happen to evaluate
to the same value.
Use the i40e-specific definition for both.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index d144297360..025901edb6 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 08/26] net/i40e: remove global pattern variable
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 07/26] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (17 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, current code cleans up the pattern list by
removing void flow items, and copies the patterns into an array. For
patterns of up to 32 flow items, that array is a preallocated global
variable, but when the pattern is bigger, a new list is dynamically
allocated with rte_zmalloc, which is overkill for this use case.
Remove the global variable, and replace the split behavior with an
unconditional allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..c5bb787f28 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -145,9 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3834,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3853,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3863,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 09/26] net/i40e: avoid rte malloc in tunnel set
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 08/26] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (16 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed as this memory is
not being stored anywhere, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d5c61cd577..06430e6319 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 10/26] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (15 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This is not
needed as this memory is not being stored anywhere, so replace it with
regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 06430e6319..654b0e5d16 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -4673,7 +4673,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4690,7 +4690,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (14 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters,
we are using rte_zmalloc followed by an immediate rte_free. This is not
needed as this memory is not being stored anywhere, so replace it with
regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
2 files changed, 26 insertions(+), 28 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 654b0e5d16..806c29368c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4165,8 +4165,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -4206,7 +4205,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
+ free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6231,8 +6230,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -6264,7 +6262,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
DONE:
- rte_free(mac_filter);
+ free(mac_filter);
return ret;
}
@@ -7154,7 +7152,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7207,7 +7205,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7230,7 +7228,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7286,7 +7284,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7455,7 +7453,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7484,7 +7482,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7510,7 +7508,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7532,7 +7530,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num++;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7561,7 +7559,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7594,7 +7592,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num--;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7626,7 +7624,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7665,7 +7663,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7696,7 +7694,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7725,7 +7723,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..fb73fa924f 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -233,7 +233,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +250,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +294,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +312,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v2 12/26] net/i40e: avoid rte malloc in VF resource queries
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (13 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This is not needed as the response is only used temporarily,
so replace it with a stack-allocated structure (the allocation is fixed
in size and pretty small).
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..2a5637b0c1 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
+ uint32_t len = sizeof(res);
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v2 13/26] net/i40e: avoid rte malloc in adminq operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (12 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed as the message
buffer is only used temporarily within the function scope, so replace it
with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 806c29368c..2e0c2e2482 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6896,7 +6896,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
int ret;
info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
+ info.msg_buf = calloc(1, info.buf_len);
if (!info.msg_buf) {
PMD_DRV_LOG(ERR, "Failed to allocate mem");
return;
@@ -6936,7 +6936,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
+ free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v2 14/26] net/i40e: avoid rte malloc in DDP package handling
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (11 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by an
immediate rte_free. This is not needed as these buffers are only used
temporarily, so replace them with stack-allocated structures or regular
malloc/free where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++++++------------------
1 file changed, 14 insertions(+), 29 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index fb73fa924f..a2e24b5ea2 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1569,9 +1569,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
+ buff = calloc(I40E_MAX_PROFILE_NUM + 4, I40E_PROFILE_INFO_SIZE);
if (!buff) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return -1;
@@ -1583,7 +1581,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
+ free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1591,20 +1589,20 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
+ free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
+ free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
+ free(buff);
return 2;
}
}
@@ -1615,12 +1613,12 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
+ free(buff);
return 3;
}
}
- rte_free(buff);
+ free(buff);
return 0;
}
@@ -1636,7 +1634,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1701,26 +1702,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1733,13 +1723,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1751,7 +1739,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1764,14 +1751,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1784,7 +1770,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v2 15/26] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (10 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by immediate rte_free.
This is not needed as these buffers are only used temporarily within
the function scope, so replace it with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2e0c2e2482..f27fbf89ee 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11787,7 +11787,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
+ pctype = calloc(pctype_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!pctype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11798,7 +11798,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
+ free(pctype);
return -1;
}
@@ -11879,7 +11879,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
+ free(pctype);
return 0;
}
@@ -11925,7 +11925,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
+ ptype = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_info));
if (!ptype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11937,15 +11937,14 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
+ free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
+ ptype_mapping = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_mapping));
if (!ptype_mapping) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
+ free(ptype);
return -1;
}
@@ -12083,8 +12082,8 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
+ free(ptype_mapping);
+ free(ptype);
return ret;
}
@@ -12119,7 +12118,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12131,7 +12130,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12169,7 +12168,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v2 16/26] net/iavf: remove remnants of pipeline mode
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 17/26] net/iavf: do not use malloc in crypto VF commands Anatoly Burakov
` (9 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the supporting definitions it used
were left behind in the code. Remove them, as they are now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v2 17/26] net/iavf: do not use malloc in crypto VF commands
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 18/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (8 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses
dynamic memory allocation (rte_malloc, at that!) for VF message
structures that are only ~40 bytes in size, and then immediately frees
them. This is wasteful and unnecessary, so use stack allocation instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
1 file changed, 51 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..cb437d3212 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,24 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp;
+ struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +529,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +540,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +707,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +752,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +766,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +773,17 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +795,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +807,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +857,17 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +880,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v2 18/26] net/iavf: decouple hash uninit from parser uninit
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 17/26] net/iavf: do not use malloc in crypto VF commands Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (7 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization,
but should instead be a separate step in the device close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
3 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..70eb7e7ec5 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -35,6 +35,7 @@
#include "iavf_generic_flow.h"
#include "rte_pmd_iavf.h"
#include "iavf_ipsec_crypto.h"
+#include "iavf_hash.h"
/* devargs */
#define IAVF_PROTO_XTR_ARG "proto_xtr"
@@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..d864998402 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -22,6 +22,7 @@
#include "iavf_log.h"
#include "iavf.h"
#include "iavf_generic_flow.h"
+#include "iavf_hash.h"
#define IAVF_PHINT_NONE 0
#define IAVF_PHINT_GTPU BIT_ULL(0)
@@ -77,7 +78,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
new file mode 100644
index 0000000000..2348f32673
--- /dev/null
+++ b/drivers/net/intel/iavf/iavf_hash.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _IAVF_HASH_H_
+#define _IAVF_HASH_H_
+
+#include "iavf.h"
+
+void
+iavf_hash_uninit(struct iavf_adapter *ad);
+
+#endif /* _IAVF_HASH_H_ */
--
2.47.3
* [PATCH v2 19/26] net/iavf: avoid rte malloc in RSS configuration
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 18/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (6 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (redirection table, lookup table, and
hash key), we are using rte_zmalloc followed by an immediate rte_free.
This is not needed as this memory is not being stored anywhere, so
replace it with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 70eb7e7ec5..d3fa47fd5e 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1574,7 +1574,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
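The `sizeof(*rss_lut) + rss_lut_size - 1` length computation in the hunks
above follows the pre-C99 one-element-trailing-array idiom used by
virtchnl messages: the struct declares a one-byte array, so one payload
byte is already counted in `sizeof`. A minimal sketch of that sizing
(struct and function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical mirror of a virtchnl-style message that ends in a
 * one-element array (pre-C99 flexible-array idiom). */
struct demo_rss_lut {
	uint16_t vsi_id;
	uint16_t lut_entries;
	uint8_t  lut[1];	/* really lut_entries bytes */
};

/* Allocation size: header plus payload, minus the one byte that
 * lut[1] already contributes to sizeof(struct demo_rss_lut). */
static size_t demo_lut_msg_len(uint16_t lut_entries)
{
	return sizeof(struct demo_rss_lut) + lut_entries - 1;
}

static struct demo_rss_lut *demo_lut_msg_alloc(uint16_t lut_entries)
{
	/* calloc gives the same zero-initialization as rte_zmalloc */
	struct demo_rss_lut *msg = calloc(1, demo_lut_msg_len(lut_entries));

	if (msg != NULL)
		msg->lut_entries = lut_entries;
	return msg;
}
```

Since the buffer never outlives the function and is never passed to code
that requires DMA-able or hugepage-backed memory, libc allocation is
sufficient here.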
* [PATCH v2 20/26] net/iavf: avoid rte malloc in MAC address operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 21/26] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
` (5 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc for a buffer that is freed within the same function. This
memory is not stored anywhere and does not need to come from DPDK
hugepage memory, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..19dce17612 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
}
}
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return;
@@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
begin = next_begin;
} while (begin < IAVF_NUM_MACADDR_MAX);
}
--
2.47.3
* [PATCH v2 21/26] net/iavf: avoid rte malloc in IPsec operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 22/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (4 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we are using rte_malloc for buffers that
are freed within the same function. These structures are only used
temporarily, so replace them with stack-allocated structures for the
small fixed-size messages, and with regular calloc/free for the
variable-sized buffers.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 58 ++++++++--------------
1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index cb437d3212..1a3004b0fc 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -899,29 +900,18 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -939,10 +929,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -957,10 +947,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1113,7 +1099,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1121,8 +1107,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1148,8 +1133,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1538,7 +1523,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1546,8 +1531,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1573,8 +1557,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
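The request/response rewrite above wraps the message header and its
fixed-size payload in an anonymous aggregate on the stack, so the pair
can be sent as one contiguous buffer of `sizeof(req)`. A sketch of that
pattern, assuming 4-byte-aligned members so no padding separates header
and payload (all names here are stand-ins, not the driver's virtchnl
definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for a message header and a fixed payload. */
struct demo_msg_hdr {
	uint16_t opcode;
	uint16_t req_id;
};

struct demo_sa_destroy {
	uint32_t sa_index;
};

/* Build the request on the stack: header and payload laid out
 * contiguously, copied out as a single buffer of sizeof(req). */
static size_t demo_build_sa_del(uint8_t *out, size_t out_len,
				uint32_t sa_index)
{
	struct {
		struct demo_msg_hdr msg;
		struct demo_sa_destroy sa;
	} req = {0};

	req.msg.opcode = 2;		/* e.g. SA_DESTROY */
	req.msg.req_id = 0xBEEF;
	req.sa.sa_index = sa_index;

	if (out_len < sizeof(req))
		return 0;
	memcpy(out, &req, sizeof(req));
	return sizeof(req);
}
```

This removes both allocation-failure paths and the cleanup label, at the
cost of requiring the message size to be known at compile time.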
* [PATCH v2 22/26] net/iavf: avoid rte malloc in queue operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 21/26] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 23/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (3 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_zmalloc for messages that are freed within the same function. These
structures are not stored anywhere, so replace them with stack allocation
where the size is a compile-time constant, and with regular calloc/free
where it is not.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 65 ++++++++++++-----------------
1 file changed, 26 insertions(+), 39 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 19dce17612..af1f5fbfc0 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,15 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req;
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1126,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1134,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1229,7 +1216,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
size = sizeof(*vc_config) +
sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
+ vc_config = calloc(1, size);
if (!vc_config)
return -ENOMEM;
@@ -1292,7 +1279,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
PMD_DRV_LOG(ERR, "Failed to execute command of"
" VIRTCHNL_OP_CONFIG_VSI_QUEUES");
- rte_free(vc_config);
+ free(vc_config);
return err;
}
--
2.47.3
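The stack aggregates above size their trailing chunk array as
`CHUNKS_NUM - 1` because the message header already contains one chunk,
and `sizeof` of the aggregate then matches the length previously computed
at run time. A sketch verifying that equivalence (illustrative structs;
this holds only when no padding is inserted between the two members,
which is the case for these 4-byte-aligned layouts):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_CHUNKS_NUM 2

struct demo_chunk {
	uint32_t type;
	uint32_t start;
	uint32_t num;
};

struct demo_queues_msg {
	uint16_t vport_id;
	uint16_t num_chunks;
	struct demo_chunk chunks[1];	/* first chunk lives in the header */
};

/* Stack aggregate replacing the heap allocation: the extra chunks sit
 * directly after the message, so sizeof(struct demo_queues_req) is the
 * wire length. */
struct demo_queues_req {
	struct demo_queues_msg msg;
	struct demo_chunk extra[DEMO_CHUNKS_NUM - 1];
};

/* The run-time computation the patch removes. */
static size_t demo_runtime_len(void)
{
	return sizeof(struct demo_queues_msg) +
	       sizeof(struct demo_chunk) * (DEMO_CHUNKS_NUM - 1);
}
```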
* [PATCH v2 23/26] net/iavf: avoid rte malloc in irq map config
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 22/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 24/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (2 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc for a
buffer that is freed within the same function. This memory is not stored
anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index af1f5fbfc0..d0cc8673e1 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1295,7 +1295,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
len = sizeof(struct virtchnl_irq_map_info) +
sizeof(struct virtchnl_vector_map) * vf->nb_msix;
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1319,7 +1319,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
+ free(map_info);
return err;
}
@@ -1337,7 +1337,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
len = sizeof(struct virtchnl_queue_vector_maps) +
sizeof(struct virtchnl_queue_vector) * (num - 1);
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1360,7 +1360,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
+ free(map_info);
return err;
}
--
2.47.3
* [PATCH v2 24/26] net/ice: avoid rte malloc in RSS RETA operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 23/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 25/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 26/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA),
we are using rte_zmalloc for a lookup-table buffer that is freed within
the same function. This memory is not stored anywhere, so replace it
with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 81da5a4656..037382b336 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1338,7 +1338,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1358,7 +1358,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index ade13600de..fbd7c0f2f2 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5583,7 +5583,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
+ lut = calloc(1, RTE_MAX(reta_size, lut_size));
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5607,7 +5607,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -5632,7 +5632,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5650,7 +5650,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v2 25/26] net/ice: avoid rte malloc in MAC address operations
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 24/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
2026-02-10 16:13 ` [PATCH v2 26/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting multicast MAC addresses, we are using
rte_zmalloc for a list buffer that is freed within the same function.
This memory is not stored anywhere, so replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 037382b336..d2a7a2847b 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -936,7 +936,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
- list = rte_zmalloc(NULL, len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return -ENOMEM;
@@ -961,7 +961,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
return err;
}
--
2.47.3
* [PATCH v2 26/26] net/ice: avoid rte malloc in raw pattern parsing
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 25/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-10 16:13 ` Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-10 16:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc for
spec and mask buffers that are freed within the same function. This
memory is not stored anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index f7730ec6ab..fcb613590e 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1879,13 +1879,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -1950,13 +1950,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index afdc8f220a..854c6e8dca 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -676,13 +676,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -733,8 +733,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
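A point worth noting in the hunks above: the spec and mask buffers are
allocated as a pair, and when the second allocation fails the first must
be freed before bailing out — the patch preserves that error path while
switching allocators. A sketch of the pattern (names are illustrative,
not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate matching zeroed spec/mask buffers of len bytes each; on the
 * second allocation failing, free the first to avoid a leak. */
static int demo_alloc_spec_mask(size_t len, unsigned char **spec,
				unsigned char **mask)
{
	*spec = calloc(1, len);
	if (*spec == NULL)
		return -1;

	*mask = calloc(1, len);
	if (*mask == NULL) {
		free(*spec);	/* do not leak the first buffer */
		*spec = NULL;
		return -1;
	}
	return 0;
}
```

Because rte_free and free both accept NULL, the shared cleanup labels in
the driver remain valid after the allocator switch.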
* [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-10 16:13 ` [PATCH v2 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (26 more replies)
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 subsequent siblings)
19 siblings, 27 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for the ixgbe, i40e, iavf, and ice PMDs.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on driver bug fix patchset [1].
[1] https://patches.dpdk.org/project/dpdk/list/?series=37333
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
Anatoly Burakov (27):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 223 +++++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 24 +-
drivers/net/intel/i40e/i40e_flow.c | 146 ++++++------
drivers/net/intel/i40e/i40e_hash.c | 27 ++-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +--
drivers/net/intel/i40e/rte_pmd_i40e.c | 59 ++---
drivers/net/intel/iavf/iavf_ethdev.c | 8 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 --
drivers/net/intel/iavf/iavf_hash.c | 14 +-
drivers/net/intel/iavf/iavf_hash.h | 13 ++
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 216 ++++++------------
drivers/net/intel/iavf/iavf_vchnl.c | 85 +++----
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 8 +-
drivers/net/intel/ice/ice_ethdev.c | 8 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 162 +++++++++-----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 ---
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 513 insertions(+), 639 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
--
2.47.3
* [PATCH v3 01/27] net/ixgbe: remove MAC type check macros
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 17:54 ` Medvedkin, Vladimir
2026-02-11 13:52 ` [PATCH v3 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (25 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative, hid a `return -ENOTSUP` inside a
statement macro, and did not add any value beyond brevity, so remove
them and make the MAC type checks explicit at each call site.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 066a69eb12..61a328363d 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -621,7 +621,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -861,7 +863,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1150,7 +1158,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
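An alternative to fully open-coding the checks, where the same MAC type
list repeats, would be a boolean helper: it keeps the list in one place
while leaving the `return -ENOTSUP` visible at the call site, unlike the
removed statement macros. A sketch under that assumption (placeholder
enum values, not the driver's `ixgbe_mac_type`):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for a few ixgbe MAC types. */
enum demo_mac_type {
	DEMO_MAC_82599EB,
	DEMO_MAC_X540,
	DEMO_MAC_X550,
	DEMO_MAC_OTHER,
};

/* Boolean helper: control flow stays at the call site, e.g.
 *	if (!demo_filter_supported_ext(hw_mac_type))
 *		return -ENOTSUP;
 */
static bool demo_filter_supported_ext(enum demo_mac_type type)
{
	return type == DEMO_MAC_82599EB || type == DEMO_MAC_X540;
}
```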
* [PATCH v3 02/27] net/ixgbe: remove security-related ifdefery
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 17:56 ` Medvedkin, Vladimir
2026-02-11 13:52 ` [PATCH v3 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (24 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is an explicit dependency of ixgbe, so there is no
longer any need to gate features behind #ifdef blocks that depend on the
presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 61a328363d..c17c6c4bf6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -249,7 +248,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
item->type == RTE_FLOW_ITEM_TYPE_IPV6);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -630,11 +628,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3074,13 +3070,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v3 03/27] net/ixgbe: split security and ntuple filters
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 19:25 ` Medvedkin, Vladimir
2026-02-11 13:52 ` [PATCH v3 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
` (23 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code with
each other. Separate the security filter from the ntuple filter and parse it
separately.
While we're at it, make the checks more stringent (such as checking for a
NULL conf) and more type-safe.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 136 ++++++++++++++++++---------
1 file changed, 91 insertions(+), 45 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c17c6c4bf6..cd8d46019f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,41 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
-
- filter->proto = IPPROTO_ESP;
- return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
- item->type == RTE_FLOW_ITEM_TYPE_IPV6);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -607,6 +572,81 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+
+ return ixgbe_crypto_add_ingress_sa_from_flow(security->security_session,
+ item->spec, item->type == RTE_FLOW_ITEM_TYPE_IPV6);
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -628,10 +668,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3066,16 +3102,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3299,6 +3338,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v3 04/27] net/i40e: get rid of global filter variables
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 05/27] net/i40e: make default RSS key global Anatoly Burakov
` (22 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact that
`rte_flow_validate()` is called directly from `rte_flow_create()`, with no way
to pass state between the two functions. Fix that by adding a small wrapper
around validation that creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything, so it
is omitted from the structure.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v3 05/27] net/i40e: make default RSS key global
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (21 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, and
in each of those places we define it as a local variable. Make it a global
constant, and adjust all callers to use it. When dealing with the adminq,
we cannot send the constant down directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it down to hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..2deb87b01b 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9082,23 +9082,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 21:03 ` Morten Brørup
2026-02-11 13:52 ` [PATCH v3 07/27] net/i40e: use proper flex len define Anatoly Burakov
` (20 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum traffic class
value of 64, we do not use unsigned values, which results in a compiler
warning when comparing `I40E_MAX_Q_PER_TC` against an unsigned value. Make
the define unsigned, and adjust callers to use the correct types. As a
consequence, `i40e_align_floor` now returns an unsigned value as well; this
is correct, because nothing about that function implies that signed usage is
a valid use case.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2deb87b01b..d5c61cd577 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9058,7 +9058,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ size_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..d144297360 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC 64U
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..cbb377295d 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ size_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v3 07/27] net/i40e: use proper flex len define
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 08/27] net/i40e: remove global pattern variable Anatoly Burakov
` (19 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and the only reason this works is that they happen to evaluate to
the same value.
Use the i40e-specific definition for both.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index d144297360..025901edb6 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v3 08/27] net/i40e: remove global pattern variable
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (18 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list by
removing void flow items and copies the patterns into an array. When there
are fewer than 32 flow items, a global 32-entry array is reused; bigger
patterns get a list dynamically allocated with rte_zmalloc, which is
overkill for this use case, since the copy is short-lived and never shared.
Remove the global variable, and replace the split behavior with an
unconditional allocation using plain calloc.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..c5bb787f28 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -145,9 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3834,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3853,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3863,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v3 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (17 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we use rte_zmalloc followed
by an immediate rte_free. This is not needed, as the memory is never stored
anywhere, so replace it with a stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d5c61cd577..06430e6319 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:06 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (16 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
use rte_zmalloc followed by an immediate rte_free. This is not needed, as
the memory is never stored anywhere, so replace it with regular
malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 06430e6319..654b0e5d16 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -4673,7 +4673,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4690,7 +4690,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v3 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:08 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (15 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters, we
use rte_zmalloc followed by an immediate rte_free. This is not needed, as
the memory is never stored anywhere, so replace it with regular
malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
2 files changed, 26 insertions(+), 28 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 654b0e5d16..806c29368c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4165,8 +4165,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -4206,7 +4205,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
+ free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6231,8 +6230,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -6264,7 +6262,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
DONE:
- rte_free(mac_filter);
+ free(mac_filter);
return ret;
}
@@ -7154,7 +7152,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7207,7 +7205,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7230,7 +7228,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7286,7 +7284,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7455,7 +7453,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7484,7 +7482,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7510,7 +7508,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7532,7 +7530,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num++;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7561,7 +7559,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7594,7 +7592,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num--;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7626,7 +7624,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7665,7 +7663,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7696,7 +7694,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7725,7 +7723,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..fb73fa924f 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -233,7 +233,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +250,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +294,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +312,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v3 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:09 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (14 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This is not needed as the response is only used temporarily,
so replace it with a stack-allocated structure (the allocation is fixed
in size and pretty small).
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..2a5637b0c1 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
+ uint32_t len = sizeof(res);
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v3 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:11 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (13 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed as the message
buffer is only used temporarily within the function scope, so replace it
with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 806c29368c..2e0c2e2482 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6896,7 +6896,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
int ret;
info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
+ info.msg_buf = calloc(1, info.buf_len);
if (!info.msg_buf) {
PMD_DRV_LOG(ERR, "Failed to allocate mem");
return;
@@ -6936,7 +6936,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
+ free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v3 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:12 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (12 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by an
immediate rte_free. This is not needed, as these buffers are only used
temporarily, so replace them with stack-allocated structures or regular
malloc/free, where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++++++------------------
1 file changed, 14 insertions(+), 29 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index fb73fa924f..a2e24b5ea2 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1569,9 +1569,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
+ buff = calloc(1, I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4);
if (!buff) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return -1;
@@ -1583,7 +1581,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
+ free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1591,20 +1589,20 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
+ free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
+ free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
+ free(buff);
return 2;
}
}
@@ -1615,12 +1613,12 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
+ free(buff);
return 3;
}
}
- rte_free(buff);
+ free(buff);
return 0;
}
@@ -1636,7 +1634,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1701,26 +1702,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1733,13 +1723,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1751,7 +1739,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1764,14 +1751,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1784,7 +1770,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v3 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-16 17:13 ` Bruce Richardson
2026-02-11 13:52 ` [PATCH v3 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (11 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by an immediate rte_free.
This is not needed as these buffers are only used temporarily within
the function scope, so replace it with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2e0c2e2482..f27fbf89ee 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11787,7 +11787,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
+ pctype = calloc(pctype_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!pctype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11798,7 +11798,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
+ free(pctype);
return -1;
}
@@ -11879,7 +11879,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
+ free(pctype);
return 0;
}
@@ -11925,7 +11925,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
+ ptype = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_info));
if (!ptype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11937,15 +11937,14 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
+ free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
+ ptype_mapping = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_mapping));
if (!ptype_mapping) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
+ free(ptype);
return -1;
}
@@ -12083,8 +12082,8 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
+ free(ptype_mapping);
+ free(ptype);
return ret;
}
@@ -12119,7 +12118,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12131,7 +12130,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12169,7 +12168,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v3 16/27] net/iavf: remove remnants of pipeline mode
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:52 ` [PATCH v3 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (10 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the infrastructure it used was left
behind in the code. Remove it, as it is now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v3 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-11 13:52 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (9 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:52 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses dynamic
memory allocation (rte_malloc, no less!) for VF message structures that are
~40 bytes in size, and then immediately frees them.
This is wasteful and unnecessary, so use stack allocation instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
1 file changed, 51 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..cb437d3212 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,24 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp;
+ struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +529,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +540,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +707,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +752,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +766,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +773,17 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +795,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +807,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +857,17 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +880,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v3 18/27] net/iavf: decouple hash uninit from parser uninit
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (8 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization,
but should rather be a separate step in the dev close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
3 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..70eb7e7ec5 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -35,6 +35,7 @@
#include "iavf_generic_flow.h"
#include "rte_pmd_iavf.h"
#include "iavf_ipsec_crypto.h"
+#include "iavf_hash.h"
/* devargs */
#define IAVF_PROTO_XTR_ARG "proto_xtr"
@@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..d864998402 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -22,6 +22,7 @@
#include "iavf_log.h"
#include "iavf.h"
#include "iavf_generic_flow.h"
+#include "iavf_hash.h"
#define IAVF_PHINT_NONE 0
#define IAVF_PHINT_GTPU BIT_ULL(0)
@@ -77,7 +78,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
new file mode 100644
index 0000000000..2348f32673
--- /dev/null
+++ b/drivers/net/intel/iavf/iavf_hash.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _IAVF_HASH_H_
+#define _IAVF_HASH_H_
+
+#include "iavf.h"
+
+void
+iavf_hash_uninit(struct iavf_adapter *ad);
+
+#endif /* _IAVF_HASH_H_ */
--
2.47.3
* [PATCH v3 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (7 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (the redirection table, lookup table, and
hash key), we use rte_zmalloc for a buffer that is freed again before the
function returns. The buffer is never stored or shared, so hugepage-backed
memory is not needed; replace it with regular calloc/free.
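The pattern can be sketched in isolation. The struct below is a simplified stand-in for the virtchnl-style messages with a variable-length tail (the real layouts live in DPDK's virtchnl headers), and `send_lut` is a hypothetical helper, not a driver function:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a virtchnl-style message with a
 * variable-length tail; real definitions use an old-style
 * trailing [1] array, mirrored here. */
struct lut_msg {
	uint16_t vsi_id;
	uint16_t lut_entries;
	uint8_t lut[1];
};

/* Build, "send", and free the message within one function: the
 * buffer is never stored or shared, so plain calloc/free suffices
 * and no hugepage memory is consumed. */
static int send_lut(uint16_t vsi_id, const uint8_t *lut, uint16_t n)
{
	size_t len = sizeof(struct lut_msg) + n - 1;
	struct lut_msg *m = calloc(1, len);

	if (m == NULL)
		return -1; /* the driver returns -ENOMEM here */
	m->vsi_id = vsi_id;
	m->lut_entries = n;
	memcpy(m->lut, lut, n);
	/* ... hand (uint8_t *)m and len to the PF mailbox ... */
	free(m);
	return 0;
}
```

The calloc keeps the zero-initialization that rte_zmalloc provided, so any message fields the function does not set explicitly still go out as zero.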
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 70eb7e7ec5..d3fa47fd5e 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1574,7 +1574,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v3 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
` (6 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we use rte_zmalloc for
a buffer that is freed again before the function returns. The buffer is
never stored anywhere, so hugepage-backed memory is not needed; replace
it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..19dce17612 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
}
}
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return;
@@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
begin = next_begin;
} while (begin < IAVF_NUM_MACADDR_MAX);
}
--
2.47.3
* [PATCH v3 21/27] net/iavf: avoid rte malloc in IPsec operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (5 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we use rte_malloc for buffers that are
freed again before the function returns. Hugepage-backed memory is not
needed for these temporary messages, so replace them with stack-allocated
structures for the small fixed-size messages and regular calloc/free for
the variable-sized buffers.
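The stack-allocation half of this can be sketched as follows. The two message structs are simplified stand-ins for the fixed-size virtchnl IPsec messages (the real definitions live in DPDK's inline-IPsec virtchnl headers), and `sa_del` is a hypothetical reduction of the driver function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the fixed-size virtchnl IPsec messages. */
struct ipsec_msg { uint16_t opcode; uint16_t req_id; };
struct sa_destroy { uint32_t sa_index; };

/* Request and response sizes are known at compile time, so a
 * composite struct on the stack replaces two heap allocations,
 * and the error-path frees disappear along with them. */
static int sa_del(uint32_t sa_index)
{
	struct {
		struct ipsec_msg msg;
		struct sa_destroy sa;
	} req = {0};

	req.msg.opcode = 1; /* stands in for INLINE_IPSEC_OP_SA_DESTROY */
	req.msg.req_id = 0xDEAD;
	req.sa.sa_index = sa_index;
	/* ... send (uint8_t *)&req, sizeof(req) over the mailbox,
	 * reading the response into a similar stack composite ... */
	return req.sa.sa_index == sa_index ? 0 : -1;
}
```

One caveat: the composite only matches the wire layout the PF expects if no padding is inserted between the header and payload members, which holds for the naturally aligned virtchnl structs this series touches.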
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 58 ++++++++--------------
1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index cb437d3212..1a3004b0fc 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -899,29 +900,18 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -939,10 +929,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -957,10 +947,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1113,7 +1099,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1121,8 +1107,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1148,8 +1133,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1538,7 +1523,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1546,8 +1531,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1573,8 +1557,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
* [PATCH v3 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (4 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we use
rte_malloc for structures that are freed again before the function
returns. These structures are never stored anywhere, so replace them
with stack allocation or calloc/free as appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 65 ++++++++++++-----------------
1 file changed, 26 insertions(+), 39 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 19dce17612..af1f5fbfc0 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,15 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req;
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1126,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1134,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1229,7 +1216,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
size = sizeof(*vc_config) +
sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
+ vc_config = calloc(1, size);
if (!vc_config)
return -ENOMEM;
@@ -1292,7 +1279,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
PMD_DRV_LOG(ERR, "Failed to execute command of"
" VIRTCHNL_OP_CONFIG_VSI_QUEUES");
- rte_free(vc_config);
+ free(vc_config);
return err;
}
--
2.47.3
* [PATCH v3 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (3 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we use rte_zmalloc for a buffer
that is freed again before the function returns. The buffer is never
stored anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index af1f5fbfc0..d0cc8673e1 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1295,7 +1295,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
len = sizeof(struct virtchnl_irq_map_info) +
sizeof(struct virtchnl_vector_map) * vf->nb_msix;
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1319,7 +1319,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
+ free(map_info);
return err;
}
@@ -1337,7 +1337,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
len = sizeof(struct virtchnl_queue_vector_maps) +
sizeof(struct virtchnl_queue_vector) * (num - 1);
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1360,7 +1360,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
+ free(map_info);
return err;
}
--
2.47.3
* [PATCH v3 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (2 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA),
we use rte_zmalloc for a buffer that is freed again before the function
returns. The buffer is never stored anywhere, so replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 81da5a4656..037382b336 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1338,7 +1338,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1358,7 +1358,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index ade13600de..fbd7c0f2f2 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5583,7 +5583,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
+ lut = calloc(1, RTE_MAX(reta_size, lut_size));
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5607,7 +5607,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -5632,7 +5632,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5650,7 +5650,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v3 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we use rte_zmalloc for
a buffer that is freed again before the function returns. The buffer is
never stored anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 037382b336..d2a7a2847b 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -936,7 +936,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
- list = rte_zmalloc(NULL, len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return -ENOMEM;
@@ -961,7 +961,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
return err;
}
--
2.47.3
* [PATCH v3 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
2026-02-11 13:53 ` [PATCH v3 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we use rte_zmalloc for
buffers that are freed again before the function returns. These buffers
are never stored anywhere, so replace them with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 5abdcbac7f..3db13dba96 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1879,13 +1879,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -1950,13 +1950,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index afdc8f220a..854c6e8dca 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -676,13 +676,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -733,8 +733,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v3 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (25 preceding siblings ...)
2026-02-11 13:53 ` [PATCH v3 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-11 13:53 ` Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-11 13:53 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern-match items and flow item
storage, we use rte_zmalloc for buffers that are freed again before the
function returns. These buffers are only used within the function scope,
so replace them with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 3db13dba96..62f1257e27 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2504,11 +2505,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 4049157eab..3f7a9f4714 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2136,19 +2137,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2167,7 +2166,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2175,8 +2174,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index 854c6e8dca..1174c505da 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1211,7 +1212,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* Re: [PATCH v3 01/27] net/ixgbe: remove MAC type check macros
2026-02-11 13:52 ` [PATCH v3 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-11 17:54 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-11 17:54 UTC (permalink / raw)
To: Anatoly Burakov, dev
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/11/2026 1:52 PM, Anatoly Burakov wrote:
> The macros used were not informative and did not add any value beyond code
> golf, so remove them and make MAC type checks explicit.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
> drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
> 2 files changed, 17 insertions(+), 15 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v3 02/27] net/ixgbe: remove security-related ifdefery
2026-02-11 13:52 ` [PATCH v3 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-11 17:56 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-11 17:56 UTC (permalink / raw)
To: Anatoly Burakov, dev
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/11/2026 1:52 PM, Anatoly Burakov wrote:
> The security library is specified as explicit dependency for ixgbe, so
> there is no more need to gate features behind #ifdef blocks that depend
> on presence of this library.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
> drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
> drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
> drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
> drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
> 6 files changed, 52 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v3 03/27] net/ixgbe: split security and ntuple filters
2026-02-11 13:52 ` [PATCH v3 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-11 19:25 ` Medvedkin, Vladimir
2026-02-12 9:42 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-11 19:25 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 2/11/2026 1:52 PM, Anatoly Burakov wrote:
> These filters are mashed together even though they almost do not share any
> code at all between each other. Separate security filter from ntuple filter
> and parse it separately.
>
> While we're at it, we're making checks more stringent (such as checking for
> NULL conf), and more type safe.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ixgbe/ixgbe_flow.c | 136 ++++++++++++++++++---------
> 1 file changed, 91 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
> index c17c6c4bf6..cd8d46019f 100644
> --- a/drivers/net/intel/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
> @@ -214,41 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
> memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
>
> - /**
> - * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
> - */
> - act = next_no_void_action(actions, NULL);
> - if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
> - const void *conf = act->conf;
> - /* check if the next not void item is END */
> - act = next_no_void_action(actions, act);
> - if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ACTION,
> - act, "Not supported action.");
> - return -rte_errno;
> - }
> -
> - /* get the IP pattern*/
> - item = next_no_void_pattern(pattern, NULL);
> - while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> - item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
> - if (item->last ||
> - item->type == RTE_FLOW_ITEM_TYPE_END) {
> - rte_flow_error_set(error, EINVAL,
> - RTE_FLOW_ERROR_TYPE_ITEM,
> - item, "IP pattern missing.");
> - return -rte_errno;
> - }
> - item = next_no_void_pattern(pattern, item);
> - }
> -
> - filter->proto = IPPROTO_ESP;
> - return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
> - item->type == RTE_FLOW_ITEM_TYPE_IPV6);
> - }
> -
> /* the first not void item can be MAC or IPv4 */
> item = next_no_void_pattern(pattern, NULL);
>
> @@ -607,6 +572,81 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
> return 0;
> }
>
> +static int
> +ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
> + struct rte_flow_error *error)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + const struct rte_flow_action_security *security;
> + const struct rte_flow_item *item;
> + const struct rte_flow_action *act;
> +
> + if (hw->mac.type != ixgbe_mac_82599EB &&
> + hw->mac.type != ixgbe_mac_X540 &&
> + hw->mac.type != ixgbe_mac_X550 &&
> + hw->mac.type != ixgbe_mac_X550EM_x &&
> + hw->mac.type != ixgbe_mac_X550EM_a &&
> + hw->mac.type != ixgbe_mac_E610)
> + return -ENOTSUP;
> +
> + if (pattern == NULL) {
> + rte_flow_error_set(error,
> + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> + NULL, "NULL pattern.");
> + return -rte_errno;
> + }
> + if (actions == NULL) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION_NUM,
> + NULL, "NULL action.");
> + return -rte_errno;
> + }
> + if (attr == NULL) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ATTR,
> + NULL, "NULL attribute.");
> + return -rte_errno;
> + }
> +
> + /* check if next non-void action is security */
> + act = next_no_void_action(actions, NULL);
> + if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "Not supported action.");
> + }
> + security = act->conf;
> + if (security == NULL) {
this looks like a bug fix. In the previous implementation act->conf was
not checked for NULL, and the subsequent call to
ixgbe_crypto_add_ingress_sa_from_flow() immediately dereferenced it.
It would probably be worth backporting this as a fix.
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION, act,
> + "NULL security action config.");
> + }
> + /* check if the next not void item is END */
> + act = next_no_void_action(actions, act);
> + if (act->type != RTE_FLOW_ACTION_TYPE_END) {
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + act, "Not supported action.");
> + }
> +
> + /* get the IP pattern*/
> + item = next_no_void_pattern(pattern, NULL);
> + while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> + item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
> + if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
> + rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + item, "IP pattern missing.");
> + return -rte_errno;
> + }
> + item = next_no_void_pattern(pattern, item);
> + }
> +
> + return ixgbe_crypto_add_ingress_sa_from_flow(security->security_session,
did you mean &security->security_session here?
> + item->spec, item->type == RTE_FLOW_ITEM_TYPE_IPV6);
probably need to add a check that item->spec is not NULL
> +}
> +
> /* a specific function for ixgbe because the flags is specific */
> static int
> ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
<snip>
--
Regards,
Vladimir
* RE: [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-11 13:52 ` [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-11 21:03 ` Morten Brørup
2026-02-13 10:30 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Morten Brørup @ 2026-02-11 21:03 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
> Currently, when we compare queue numbers against maximum traffic class
> value of 64, we do not use unsigned values, which results in compiler
> warning when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned
> value. Make it unsigned, and adjust callers to use correct types. As a
> consequence, `i40e_align_floor` now returns unsigned value as well -
> this
> is correct, because nothing about that function implies signed usage
> being
> a valid use case.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
> drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
> drivers/net/intel/i40e/i40e_hash.c | 4 ++--
> 3 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/intel/i40e/i40e_ethdev.c
> b/drivers/net/intel/i40e/i40e_ethdev.c
> index 2deb87b01b..d5c61cd577 100644
> --- a/drivers/net/intel/i40e/i40e_ethdev.c
> +++ b/drivers/net/intel/i40e/i40e_ethdev.c
> @@ -9058,7 +9058,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
> struct i40e_hw *hw = &pf->adapter->hw;
> uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
> uint32_t i;
> - int num;
> + size_t num;
Why not just unsigned int? size_t seems weird when not counting bytes.
Or uint16_t, considering its use.
>
> /* If both VMDQ and RSS enabled, not all of PF queues are
> * configured. It's necessary to calculate the actual PF
> diff --git a/drivers/net/intel/i40e/i40e_ethdev.h
> b/drivers/net/intel/i40e/i40e_ethdev.h
> index 0de036f2d9..d144297360 100644
> --- a/drivers/net/intel/i40e/i40e_ethdev.h
> +++ b/drivers/net/intel/i40e/i40e_ethdev.h
> @@ -24,7 +24,7 @@
> #define I40E_AQ_LEN 32
> #define I40E_AQ_BUF_SZ 4096
> /* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
> -#define I40E_MAX_Q_PER_TC 64
> +#define I40E_MAX_Q_PER_TC 64U
> #define I40E_NUM_DESC_DEFAULT 512
> #define I40E_NUM_DESC_ALIGN 32
> #define I40E_BUF_SIZE_MIN 1024
> @@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
> hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
> }
>
> -static inline int
> -i40e_align_floor(int n)
> +static inline uint32_t
> +i40e_align_floor(uint32_t n)
> {
> if (n == 0)
> return 0;
> diff --git a/drivers/net/intel/i40e/i40e_hash.c
> b/drivers/net/intel/i40e/i40e_hash.c
> index f20b40e7d0..cbb377295d 100644
> --- a/drivers/net/intel/i40e/i40e_hash.c
> +++ b/drivers/net/intel/i40e/i40e_hash.c
> @@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev
> *dev,
> struct i40e_pf *pf;
> struct i40e_hw *hw;
> uint16_t i;
> - int max_queue;
> + size_t max_queue;
Why not just unsigned int? size_t seems weird when not counting bytes.
Or uint16_t, like rss_act->queue[i].
But then I40E_MAX_Q_PER_TC should maybe also be defined as UINT16_C(64), and maybe more should be uint16_t too.
>
> hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> if (!rss_act->queue_num ||
> @@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev
> *dev,
> max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
>
> for (i = 0; i < rss_act->queue_num; i++) {
> - if ((int)rss_act->queue[i] >= max_queue)
> + if (rss_act->queue[i] >= max_queue)
> break;
> }
>
> --
> 2.47.3
* Re: [PATCH v3 03/27] net/ixgbe: split security and ntuple filters
2026-02-11 19:25 ` Medvedkin, Vladimir
@ 2026-02-12 9:42 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-12 9:42 UTC (permalink / raw)
To: Medvedkin, Vladimir, dev
On 2/11/2026 8:25 PM, Medvedkin, Vladimir wrote:
>
> On 2/11/2026 1:52 PM, Anatoly Burakov wrote:
>> These filters are mashed together even though they almost do not share
>> any
>> code at all between each other. Separate security filter from ntuple
>> filter
>> and parse it separately.
>>
>> While we're at it, we're making checks more stringent (such as
>> checking for
>> NULL conf), and more type safe.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/ixgbe/ixgbe_flow.c | 136 ++++++++++++++++++---------
>> 1 file changed, 91 insertions(+), 45 deletions(-)
>>
>> diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/
>> ixgbe/ixgbe_flow.c
>> index c17c6c4bf6..cd8d46019f 100644
>> --- a/drivers/net/intel/ixgbe/ixgbe_flow.c
>> +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
>> @@ -214,41 +214,6 @@ cons_parse_ntuple_filter(const struct
>> rte_flow_attr *attr,
>> memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
>> memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
>> - /**
>> - * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
>> - */
>> - act = next_no_void_action(actions, NULL);
>> - if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
>> - const void *conf = act->conf;
>> - /* check if the next not void item is END */
>> - act = next_no_void_action(actions, act);
>> - if (act->type != RTE_FLOW_ACTION_TYPE_END) {
>> - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
>> - rte_flow_error_set(error, EINVAL,
>> - RTE_FLOW_ERROR_TYPE_ACTION,
>> - act, "Not supported action.");
>> - return -rte_errno;
>> - }
>> -
>> - /* get the IP pattern*/
>> - item = next_no_void_pattern(pattern, NULL);
>> - while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
>> - item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
>> - if (item->last ||
>> - item->type == RTE_FLOW_ITEM_TYPE_END) {
>> - rte_flow_error_set(error, EINVAL,
>> - RTE_FLOW_ERROR_TYPE_ITEM,
>> - item, "IP pattern missing.");
>> - return -rte_errno;
>> - }
>> - item = next_no_void_pattern(pattern, item);
>> - }
>> -
>> - filter->proto = IPPROTO_ESP;
>> - return ixgbe_crypto_add_ingress_sa_from_flow(conf, item->spec,
>> - item->type == RTE_FLOW_ITEM_TYPE_IPV6);
>> - }
>> -
>> /* the first not void item can be MAC or IPv4 */
>> item = next_no_void_pattern(pattern, NULL);
>> @@ -607,6 +572,81 @@ cons_parse_ntuple_filter(const struct
>> rte_flow_attr *attr,
>> return 0;
>> }
>> +static int
>> +ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct
>> rte_flow_attr *attr,
>> + const struct rte_flow_item pattern[], const struct
>> rte_flow_action actions[],
>> + struct rte_flow_error *error)
>> +{
>> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data-
>> >dev_private);
>> + const struct rte_flow_action_security *security;
>> + const struct rte_flow_item *item;
>> + const struct rte_flow_action *act;
>> +
>> + if (hw->mac.type != ixgbe_mac_82599EB &&
>> + hw->mac.type != ixgbe_mac_X540 &&
>> + hw->mac.type != ixgbe_mac_X550 &&
>> + hw->mac.type != ixgbe_mac_X550EM_x &&
>> + hw->mac.type != ixgbe_mac_X550EM_a &&
>> + hw->mac.type != ixgbe_mac_E610)
>> + return -ENOTSUP;
>> +
>> + if (pattern == NULL) {
>> + rte_flow_error_set(error,
>> + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
>> + NULL, "NULL pattern.");
>> + return -rte_errno;
>> + }
>> + if (actions == NULL) {
>> + rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ACTION_NUM,
>> + NULL, "NULL action.");
>> + return -rte_errno;
>> + }
>> + if (attr == NULL) {
>> + rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ATTR,
>> + NULL, "NULL attribute.");
>> + return -rte_errno;
>> + }
>> +
>> + /* check if next non-void action is security */
>> + act = next_no_void_action(actions, NULL);
>> + if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
>> + return rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ACTION,
>> + act, "Not supported action.");
>> + }
>> + security = act->conf;
>> + if (security == NULL) {
> this looks like a bug fix. In a previous implementation it didn't check
> if act->conf is NULL and consequent calling for
> ixgbe_crypto_add_ingress_sa_from_flow() immediately dereference it.
> Probably it would be great to backport this as a fix.
>> + return rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ACTION, act,
>> + "NULL security action config.");
>> + }
>> + /* check if the next not void item is END */
>> + act = next_no_void_action(actions, act);
>> + if (act->type != RTE_FLOW_ACTION_TYPE_END) {
>> + return rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ACTION,
>> + act, "Not supported action.");
>> + }
>> +
>> + /* get the IP pattern*/
>> + item = next_no_void_pattern(pattern, NULL);
>> + while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
>> + item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
>> + if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
>> + rte_flow_error_set(error, EINVAL,
>> + RTE_FLOW_ERROR_TYPE_ITEM,
>> + item, "IP pattern missing.");
>> + return -rte_errno;
>> + }
>> + item = next_no_void_pattern(pattern, item);
>> + }
>> +
>> + return ixgbe_crypto_add_ingress_sa_from_flow(security-
>> >security_session,
> did you mean &security->security_session here?
No, this should be correct.
However, I now suspect the original code was a bug too.
Here's what the original code said:
```
return ixgbe_crypto_add_ingress_sa_from_flow(conf, ...);
```
The `conf` pointer points to `action->conf`, which in case of
RTE_FLOW_ACTION_SECURITY translates to `struct rte_flow_action_security`
structure.
However, the actual function we passed this to was doing this:
```
struct ixgbe_crypto_session *ic_session = (void *)(uintptr_t)((const
struct rte_security_session *)sess)->driver_priv_data;
```
where `sess` is the `conf`, but it is treated as a pointer to
`rte_security_session`, not `rte_flow_action_security`. So, this is
actually a bug too, and it kinda feels like this never worked, because
this code has been here since the very beginning...
There's also no reason why *any* of this has to work with void pointers
when this is the only user of this function, so I'll rewrite it and
include it in the bugfix patchset, so that this patch arrives clean.
>> + item->spec, item->type == RTE_FLOW_ITEM_TYPE_IPV6);
> probably need to add check if item->spec is NULL
Yes, thanks for pointing that out. Will fix in bugfix patchset as well.
>> +}
>> +
>> /* a specific function for ixgbe because the flags is specific */
>> static int
>> ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
> <snip>
>
--
Thanks,
Anatoly
* [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-11 13:52 ` [PATCH v3 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (26 more replies)
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 subsequent siblings)
19 siblings, 27 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for the ixgbe, i40e, iavf, and ice PMDs.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on the driver bug fix patchset [1] (already integrated into next-net-intel).
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
Anatoly Burakov (27):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 223 +++++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 24 +-
drivers/net/intel/i40e/i40e_flow.c | 146 ++++++------
drivers/net/intel/i40e/i40e_hash.c | 27 ++-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +--
drivers/net/intel/i40e/rte_pmd_i40e.c | 59 ++---
drivers/net/intel/iavf/iavf_ethdev.c | 8 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 --
drivers/net/intel/iavf/iavf_hash.c | 14 +-
drivers/net/intel/iavf/iavf_hash.h | 13 ++
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 216 ++++++------------
drivers/net/intel/iavf/iavf_vchnl.c | 85 +++----
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 8 +-
drivers/net/intel/ice/ice_ethdev.c | 8 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 220 ++++++++++--------
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 ---
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 536 insertions(+), 674 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
--
2.47.3
* [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:44 ` Burakov, Anatoly
2026-02-16 16:58 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (25 subsequent siblings)
26 siblings, 2 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v4 02/27] net/ixgbe: remove security-related ifdefery
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:44 ` Burakov, Anatoly
2026-02-13 10:26 ` [PATCH v4 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (24 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer a need to gate features behind #ifdef blocks that depend
on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v4 03/27] net/ixgbe: split security and ntuple filters
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
` (23 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code
with each other. Separate the security filter from the ntuple filter and
parse it separately.
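The resulting control flow can be sketched as follows: the specialized parser is tried first, and only when it does not match does flow creation fall through to the generic ntuple parser. This is a minimal illustrative model, not the driver's code; the function names and the `pattern` stand-in are hypothetical.

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the two separated parsers: each returns 0
 * on a match, or a negative errno when the flow is not of its kind. */
static int parse_security(const char *pattern)
{
	/* pretend only patterns starting with 'e' (e.g. "esp") are security flows */
	return (pattern != NULL && pattern[0] == 'e') ? 0 : -ENOTSUP;
}

static int parse_ntuple(const char *pattern)
{
	return (pattern != NULL) ? 0 : -EINVAL;
}

/* After the split, creation tries the security parser first and only
 * falls back to the ntuple parser when it does not match. */
static int classify_flow(const char *pattern, int *is_security)
{
	*is_security = 0;
	if (parse_security(pattern) == 0) {
		*is_security = 1;
		return 0;
	}
	return parse_ntuple(pattern);
}
```

The same ordering is applied in both validate and create paths, so a security flow never reaches the ntuple logic.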
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 194 ++++++++++++++++-----------
1 file changed, 114 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..78f40b5c37 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,104 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action,
+ * which is const. however, we do need to act on the session, so
+ * either we do some kind of pointer based lookup to get session
+ * pointer internally (which quickly gets unwieldy for lots of
+ * flows case), or we simply cast away constness.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +691,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3125,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3361,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v4 04/27] net/i40e: get rid of global filter variables
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 05/27] net/i40e: make default RSS key global Anatoly Burakov
` (22 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact
that `rte_flow_validate()` is called directly from `rte_flow_create()`,
and that it is not otherwise possible to pass state between the two
functions. Fix that by making a small wrapper around validation that
creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything,
so it is omitted from the structure.
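The shape of the change can be modeled with a tagged-union context that parsers fill in, so validate and create share state through a caller-provided struct instead of globals. The names and fields below are illustrative placeholders, not the driver's actual types.

```c
#include <string.h>

/* Caller-provided context replacing the former globals: a union of
 * per-filter configs plus a tag recording which member is valid. */
enum filter_type { FILTER_NONE, FILTER_ETHERTYPE, FILTER_FDIR };

struct filter_ctx {
	union {
		int ethertype;  /* placeholder for the ethertype filter config */
		int fdir_rule;  /* placeholder for the fdir filter config */
	};
	enum filter_type type;
};

/* Shared check routine: fills the context it is handed. */
static int flow_check(int kind, struct filter_ctx *ctx)
{
	if (kind == 1) {
		ctx->ethertype = 0x0800;
		ctx->type = FILTER_ETHERTYPE;
	} else {
		ctx->fdir_rule = 42;
		ctx->type = FILTER_FDIR;
	}
	return 0;
}

/* validate() does not need the parsed result, so it passes a
 * throwaway context and discards it. */
static int flow_validate(int kind)
{
	struct filter_ctx dummy = {0};

	return flow_check(kind, &dummy);
}
```

create() would call `flow_check()` with a real context and then switch on `ctx.type` to program the hardware, mirroring the patch's `i40e_flow_check()`/`i40e_flow_validate()` split.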
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
* [PATCH v4 05/27] net/i40e: make default RSS key global
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (21 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, and
each of those places defines it as a local variable. Make it a global
constant, and adjust all callers to use it. When dealing with the adminq,
we cannot send the constant down directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it down to hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..2deb87b01b 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9082,23 +9082,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v4 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 07/27] net/i40e: use proper flex len define Anatoly Burakov
` (20 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum per-TC queue
count of 64, we do not use unsigned values, which results in a compiler
warning when `I40E_MAX_Q_PER_TC` is compared to an unsigned value. Make the
define unsigned, and adjust callers to use the correct types. As a
consequence, `i40e_align_floor` now returns an unsigned value as well -
this is correct, because nothing about that function implies that signed
usage is a valid use case.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2deb87b01b..d5c61cd577 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9058,7 +9058,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ size_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..d144297360 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC 64U
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..cbb377295d 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ size_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v4 07/27] net/i40e: use proper flex len define
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 08/27] net/i40e: remove global pattern variable Anatoly Burakov
` (19 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask byte arrays use different
array-length defines, and this only works because both happen to evaluate
to the same value.
Use the i40e-specific definition instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index d144297360..025901edb6 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v4 08/27] net/i40e: remove global pattern variable
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (18 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, current code cleans up the pattern list by
removing void flow items, and copies the remaining items into an array.
When dealing with fewer than 32 flow items, that array is a global
variable, but bigger patterns get a new list dynamically allocated with an
rte_zmalloc call, which seems like overkill for this use case.
Remove the global variable, and replace the split behavior with an
unconditional allocation using plain calloc.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..c5bb787f28 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -145,9 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3834,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3853,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3863,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v4 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (17 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we are using an rte_zmalloc
allocation that is freed again before the function returns. This is not
needed, as the memory is never stored anywhere, so replace it with a stack
allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d5c61cd577..06430e6319 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v4 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (16 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This is not
needed as this memory is not being stored anywhere, so replace it with
regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 06430e6319..654b0e5d16 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -4673,7 +4673,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4690,7 +4690,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v4 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:14 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (15 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters, we
are using rte_zmalloc allocations that are freed again before the function
returns. This is not needed, as the memory is never stored anywhere, so
replace them with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
2 files changed, 26 insertions(+), 28 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 654b0e5d16..806c29368c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4165,8 +4165,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -4206,7 +4205,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
+ free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6231,8 +6230,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -6264,7 +6262,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
DONE:
- rte_free(mac_filter);
+ free(mac_filter);
return ret;
}
@@ -7154,7 +7152,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7207,7 +7205,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7230,7 +7228,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7286,7 +7284,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7455,7 +7453,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7484,7 +7482,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7510,7 +7508,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7532,7 +7530,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num++;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7561,7 +7559,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7594,7 +7592,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num--;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7626,7 +7624,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7665,7 +7663,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7696,7 +7694,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7725,7 +7723,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..fb73fa924f 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -233,7 +233,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +250,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +294,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +312,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v4 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:14 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (14 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This is not needed as the response is only used temporarily,
so replace it with a stack-allocated structure (the allocation is fixed
in size and pretty small).
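The resulting pattern can be sketched as follows; the struct names and fields below are simplified stand-ins for the virtchnl definitions, not the actual driver types:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the virtchnl structures (hypothetical fields;
 * the real virtchnl_vf_resource / virtchnl_vsi_resource carry more). */
struct vsi_resource {
	uint16_t vsi_id;
	uint16_t num_queue_pairs;
};

struct vf_resource {
	uint16_t num_vsis;
	struct vsi_resource vsi_res[1]; /* one VSI embedded, as in virtchnl */
};

/* With a single VSI the whole reply is a fixed-size object, so it can
 * live on the stack and the reply length is known at compile time. */
static uint32_t build_vf_reply(struct vf_resource *res, uint16_t vsi_id)
{
	memset(res, 0, sizeof(*res));
	res->num_vsis = 1;
	res->vsi_res[0].vsi_id = vsi_id;
	res->vsi_res[0].num_queue_pairs = 4;
	return (uint32_t)sizeof(*res);
}
```

Since no allocation can fail here, the error path that the heap version needed (and its `send_msg` label) goes away with it.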
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..2a5637b0c1 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
+ uint32_t len = sizeof(res);
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v4 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:15 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (13 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed as the message
buffer is only used temporarily within the function scope, so replace it
with regular malloc/free.
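A minimal sketch of the resulting pattern (the buffer size macro and function name are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

#define AQ_BUF_SZ 1024 /* stand-in for I40E_AQ_BUF_SZ */

/* A temporary, zero-initialized message buffer: plain calloc/free is
 * sufficient, since the buffer never needs to be DMA-able or backed by
 * hugepage memory. */
static int handle_aq_messages(void)
{
	unsigned char *msg_buf = calloc(1, AQ_BUF_SZ);

	if (msg_buf == NULL)
		return -1;

	/* ... read admin queue events into msg_buf and dispatch them ... */
	int zeroed = (msg_buf[0] == 0 && msg_buf[AQ_BUF_SZ - 1] == 0);

	free(msg_buf);
	return zeroed ? 0 : -1;
}
```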
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 806c29368c..2e0c2e2482 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6896,7 +6896,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
int ret;
info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
+ info.msg_buf = calloc(1, info.buf_len);
if (!info.msg_buf) {
PMD_DRV_LOG(ERR, "Failed to allocate mem");
return;
@@ -6936,7 +6936,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
+ free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v4 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:15 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (12 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Driver Profile (DDP) packages and
checking profile information, we are using rte_zmalloc followed by
immediate rte_free. This is not needed as these buffers are only used
temporarily, so replace it with stack-allocated structures or regular
malloc/free where appropriate.
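The stack-allocated case from this patch follows the "header plus payload in one composite" pattern, sketched below with hypothetical simplified fields (the real `i40e_profile_section_header` and `i40e_profile_info` differ):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the DDP profile section layout. */
struct profile_section_header {
	uint32_t section_type;
	uint32_t data_len;
};

struct profile_info {
	uint32_t track_id;
	char name[32];
};

/* The section header and its payload are laid out contiguously in one
 * fixed-size stack object, which is then handed around as a byte buffer
 * instead of being heap-allocated and freed on every exit path. */
struct profile_info_sec {
	struct profile_section_header sec;
	struct profile_info info;
};

static void fill_profile_info_sec(uint8_t *buf, uint32_t track_id)
{
	struct profile_info_sec *s = (struct profile_info_sec *)buf;

	s->sec.section_type = 2;
	s->sec.data_len = sizeof(struct profile_info);
	s->info.track_id = track_id;
}
```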
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++++++------------------
1 file changed, 14 insertions(+), 29 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index fb73fa924f..a2e24b5ea2 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1569,9 +1569,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
+ buff = calloc(I40E_MAX_PROFILE_NUM + 4, I40E_PROFILE_INFO_SIZE);
if (!buff) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return -1;
@@ -1583,7 +1581,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
+ free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1591,20 +1589,20 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
+ free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
+ free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
+ free(buff);
return 2;
}
}
@@ -1615,12 +1613,12 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
+ free(buff);
return 3;
}
}
- rte_free(buff);
+ free(buff);
return 0;
}
@@ -1636,7 +1634,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1701,26 +1702,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1733,13 +1723,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1751,7 +1739,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1764,14 +1751,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1784,7 +1770,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v4 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:15 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (11 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by immediate rte_free.
This is not needed as these buffers are only used temporarily within
the function scope, so replace it with regular malloc/free.
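The `calloc(count, size)` form used throughout this patch has a side benefit worth noting: unlike a hand-computed `count * size` byte length, the multiplication is overflow-checked by the allocator. A sketch, with a hypothetical stand-in for the ptype info layout:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for rte_pmd_i40e_ptype_info (hypothetical layout). */
struct ptype_info {
	uint8_t ptype_id;
	uint8_t data[7];
};

/* calloc zeroes the array like rte_zmalloc did, and returns NULL if
 * ptype_num * sizeof(struct ptype_info) would overflow. */
static struct ptype_info *alloc_ptype_table(size_t ptype_num)
{
	return calloc(ptype_num, sizeof(struct ptype_info));
}
```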
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2e0c2e2482..f27fbf89ee 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11787,7 +11787,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
+ pctype = calloc(pctype_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!pctype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11798,7 +11798,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
+ free(pctype);
return -1;
}
@@ -11879,7 +11879,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
+ free(pctype);
return 0;
}
@@ -11925,7 +11925,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
+ ptype = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_info));
if (!ptype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11937,15 +11937,14 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
+ free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
+ ptype_mapping = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_mapping));
if (!ptype_mapping) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
+ free(ptype);
return -1;
}
@@ -12083,8 +12082,8 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
+ free(ptype_mapping);
+ free(ptype);
return ret;
}
@@ -12119,7 +12118,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12131,7 +12130,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12169,7 +12168,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:16 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (10 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the definitions used only by
pipeline mode were left in the code. Remove them, as they are unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:19 ` Bruce Richardson
2026-02-16 22:25 ` Stephen Hemminger
2026-02-13 10:26 ` [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (9 subsequent siblings)
26 siblings, 2 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses
dynamic memory allocation (rte_malloc, at that!) to allocate VF message
structures that are ~40 bytes in size, and then immediately frees them.
This is wasteful and unnecessary, so use stack allocation instead.
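The shape of the change can be sketched as below; the message layouts, opcode values, and the mailbox round trip are hypothetical simplifications, not the real `inline_ipsec_msg`/virtchnl definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Much-simplified stand-ins for the IPsec mailbox message layouts. */
struct ipsec_msg   { uint16_t opcode; uint16_t req_id; };
struct sa_cfg      { uint32_t spi; };
struct sa_cfg_resp { uint32_t sa_handle; };

/* Request and response are small fixed-size composites, so both can
 * live on the stack for the duration of the mailbox call; no malloc,
 * no cleanup label. */
static int sa_add(uint32_t spi, uint32_t *sa_handle)
{
	struct { struct ipsec_msg msg; struct sa_cfg cfg; } req = {0};
	struct { struct ipsec_msg msg; struct sa_cfg_resp resp; } resp = {0};

	req.msg.opcode = 1;
	req.msg.req_id = 0xDEAD;
	req.cfg.spi = spi;

	/* Stand-in for the mailbox round trip: echo the header back and
	 * hand out an SA handle. */
	resp.msg = req.msg;
	resp.resp.sa_handle = 42;

	if (resp.msg.opcode != req.msg.opcode ||
	    resp.msg.req_id != req.msg.req_id)
		return -1;

	*sa_handle = resp.resp.sa_handle;
	return 0;
}
```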
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
1 file changed, 51 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..cb437d3212 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,24 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp;
+ struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +529,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +540,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +707,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +752,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +766,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +773,17 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +795,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +807,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +857,17 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +880,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:23 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (8 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization,
but should rather be a separate step in the dev close flow.
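The decoupling can be illustrated with the sketch below; the function names mirror the patch, but the order-recording array and `dev_close` body exist only for this example:

```c
#include <assert.h>

static int step;
static int order[2];

/* RSS teardown and parser teardown become independent steps. */
static void hash_uninit(void)
{
	order[step++] = 1; /* delete the default RSS configuration */
}

static void hash_uninit_parser(void)
{
	order[step++] = 2; /* unregister the hash flow parser */
}

/* Dev close now invokes each step explicitly, rather than having the
 * parser uninit implicitly tear down the RSS configuration. */
static void dev_close(void)
{
	hash_uninit();        /* remove RSS configuration first... */
	hash_uninit_parser(); /* ...then drop the parser */
}
```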
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
3 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..70eb7e7ec5 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -35,6 +35,7 @@
#include "iavf_generic_flow.h"
#include "rte_pmd_iavf.h"
#include "iavf_ipsec_crypto.h"
+#include "iavf_hash.h"
/* devargs */
#define IAVF_PROTO_XTR_ARG "proto_xtr"
@@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..d864998402 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -22,6 +22,7 @@
#include "iavf_log.h"
#include "iavf.h"
#include "iavf_generic_flow.h"
+#include "iavf_hash.h"
#define IAVF_PHINT_NONE 0
#define IAVF_PHINT_GTPU BIT_ULL(0)
@@ -77,7 +78,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
new file mode 100644
index 0000000000..2348f32673
--- /dev/null
+++ b/drivers/net/intel/iavf/iavf_hash.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _IAVF_HASH_H_
+#define _IAVF_HASH_H_
+
+#include "iavf.h"
+
+void
+iavf_hash_uninit(struct iavf_adapter *ad);
+
+#endif /* _IAVF_HASH_H_ */
--
2.47.3
* [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:24 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (7 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (redirection table, lookup table, and
hash key), we are using rte_zmalloc followed by an immediate rte_free.
This is not needed, as this memory is not stored anywhere, so replace it
with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 70eb7e7ec5..d3fa47fd5e 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1574,7 +1574,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:27 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
` (6 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This is not needed, as
this memory is not stored anywhere, so replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..19dce17612 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
}
}
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return;
@@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
begin = next_begin;
} while (begin < IAVF_NUM_MACADDR_MAX);
}
--
2.47.3
* [PATCH v4 21/27] net/iavf: avoid rte malloc in IPsec operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:30 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (5 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we are using rte_malloc followed by an
immediate rte_free. This is not needed, as these structures are only
used temporarily, so replace them with stack-allocated structures for
small fixed-size messages and regular calloc/free for variable-sized
buffers.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 58 ++++++++--------------
1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index cb437d3212..1a3004b0fc 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -899,29 +900,18 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -939,10 +929,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -957,10 +947,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1113,7 +1099,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1121,8 +1107,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1148,8 +1133,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1538,7 +1523,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1546,8 +1531,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1573,8 +1557,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
* [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:31 ` Bruce Richardson
2026-02-16 22:24 ` Stephen Hemminger
2026-02-13 10:26 ` [PATCH v4 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (4 subsequent siblings)
26 siblings, 2 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is not needed, as these
structures are not stored anywhere, so replace them with stack
allocation or calloc/free where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 65 ++++++++++++-----------------
1 file changed, 26 insertions(+), 39 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 19dce17612..af1f5fbfc0 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,15 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req;
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1126,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1134,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1229,7 +1216,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
size = sizeof(*vc_config) +
sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
+ vc_config = calloc(1, size);
if (!vc_config)
return -ENOMEM;
@@ -1292,7 +1279,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
PMD_DRV_LOG(ERR, "Failed to execute command of"
" VIRTCHNL_OP_CONFIG_VSI_QUEUES");
- rte_free(vc_config);
+ free(vc_config);
return err;
}
--
2.47.3
* [PATCH v4 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:31 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (3 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This is not needed, as this memory is not
stored anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index af1f5fbfc0..d0cc8673e1 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1295,7 +1295,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
len = sizeof(struct virtchnl_irq_map_info) +
sizeof(struct virtchnl_vector_map) * vf->nb_msix;
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1319,7 +1319,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
+ free(map_info);
return err;
}
@@ -1337,7 +1337,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
len = sizeof(struct virtchnl_queue_vector_maps) +
sizeof(struct virtchnl_queue_vector) * (num - 1);
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1360,7 +1360,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
+ free(map_info);
return err;
}
--
2.47.3
* [PATCH v4 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:32 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (2 subsequent siblings)
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA),
we are using rte_zmalloc followed by an immediate rte_free. This is not
needed, as this memory is not stored anywhere, so replace it with
regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 81da5a4656..037382b336 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1338,7 +1338,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1358,7 +1358,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index ade13600de..fbd7c0f2f2 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5583,7 +5583,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
+ lut = calloc(1, RTE_MAX(reta_size, lut_size));
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5607,7 +5607,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -5632,7 +5632,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5650,7 +5650,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v4 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:33 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-13 10:26 ` [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting multicast MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This is not needed, as
this memory is not stored anywhere, so replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 037382b336..d2a7a2847b 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -936,7 +936,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
- list = rte_zmalloc(NULL, len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return -ENOMEM;
@@ -961,7 +961,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
return err;
}
--
2.47.3
* [PATCH v4 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:34 ` Bruce Richardson
2026-02-13 10:26 ` [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed, as this memory is
not stored anywhere, so replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index da22b65a77..5f44b5c818 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1879,13 +1879,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -1950,13 +1950,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index afdc8f220a..854c6e8dca 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -676,13 +676,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -733,8 +733,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (25 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-13 10:26 ` Anatoly Burakov
2026-02-16 17:37 ` Bruce Richardson
26 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-13 10:26 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
is not needed, as these buffers are only used temporarily within the
function scope, so replace them with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 5f44b5c818..8cca831fa9 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2504,11 +2505,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 4049157eab..3f7a9f4714 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2136,19 +2137,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2167,7 +2166,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2175,8 +2174,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index 854c6e8dca..1174c505da 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1211,7 +1212,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* Re: [PATCH v3 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-11 21:03 ` Morten Brørup
@ 2026-02-13 10:30 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-13 10:30 UTC (permalink / raw)
To: Morten Brørup, dev, Bruce Richardson
On 2/11/2026 10:03 PM, Morten Brørup wrote:
>> Currently, when we compare queue numbers against maximum traffic class
>> value of 64, we do not use unsigned values, which results in compiler
>> warning when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned
>> value. Make it unsigned, and adjust callers to use correct types. As a
>> consequence, `i40e_align_floor` now returns unsigned value as well -
>> this
>> is correct, because nothing about that function implies signed usage
>> being
>> a valid use case.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/i40e/i40e_ethdev.c | 2 +-
>> drivers/net/intel/i40e/i40e_ethdev.h | 6 +++---
>> drivers/net/intel/i40e/i40e_hash.c | 4 ++--
>> 3 files changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/net/intel/i40e/i40e_ethdev.c
>> b/drivers/net/intel/i40e/i40e_ethdev.c
>> index 2deb87b01b..d5c61cd577 100644
>> --- a/drivers/net/intel/i40e/i40e_ethdev.c
>> +++ b/drivers/net/intel/i40e/i40e_ethdev.c
>> @@ -9058,7 +9058,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
>> struct i40e_hw *hw = &pf->adapter->hw;
>> uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
>> uint32_t i;
>> - int num;
>> + size_t num;
>
> Why not just unsigned int? size_t seems weird when not counting bytes.
>
> Or uint16_t, considering its use.
>
>> struct i40e_pf *pf;
>> struct i40e_hw *hw;
>> uint16_t i;
>> - int max_queue;
>> + size_t max_queue;
>
> Why not just unsigned int? size_t seems weird when not counting bytes.
>
> Or uint16_t, like rss_act->queue[i].
> But then I40E_MAX_Q_PER_TC should maybe also be defined as UINT16_C(64), and maybe more should be uint16_t too.
Good points. Missed this in v4 respin so will address in v5. Thanks for
the review!
--
Thanks,
Anatoly
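The type discussion above can be sketched as a minimal example. The `UINT16_C(64)` spelling of `I40E_MAX_Q_PER_TC` is Morten's suggestion, not merged code, and the function name is illustrative:

```c
#include <stdint.h>

/* Morten's suggested spelling: make the constant itself uint16_t-typed */
#define I40E_MAX_Q_PER_TC UINT16_C(64)

/* With a signed loop counter, `i < some_unsigned_value` can trigger
 * -Wsign-compare; keeping both sides unsigned (here uint16_t) avoids it
 * and matches the width of rss_act->queue[i]. */
static uint16_t
count_queues_in_tc(uint16_t num_queues)
{
	uint16_t count = 0;

	for (uint16_t i = 0; i < num_queues && i < I40E_MAX_Q_PER_TC; i++)
		count++;
	return count;
}
```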
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-13 10:26 ` [PATCH v4 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-13 10:44 ` Burakov, Anatoly
2026-02-16 16:58 ` Bruce Richardson
1 sibling, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-13 10:44 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
On 2/13/2026 11:26 AM, Anatoly Burakov wrote:
> The macros used were not informative and did not add any value beyond code
> golf, so remove them and make MAC type checks explicit.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Missed from v3
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
--
Thanks,
Anatoly
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 02/27] net/ixgbe: remove security-related ifdefery
2026-02-13 10:26 ` [PATCH v4 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-13 10:44 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-13 10:44 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
On 2/13/2026 11:26 AM, Anatoly Burakov wrote:
> The security library is specified as explicit dependency for ixgbe, so
> there is no more need to gate features behind #ifdef blocks that depend
> on presence of this library.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Missed from v3
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
--
Thanks,
Anatoly
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-13 10:26 ` [PATCH v4 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-13 10:44 ` Burakov, Anatoly
@ 2026-02-16 16:58 ` Bruce Richardson
2026-02-17 12:50 ` Burakov, Anatoly
1 sibling, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 16:58 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:12AM +0000, Anatoly Burakov wrote:
> The macros used were not informative and did not add any value beyond code
> golf, so remove them and make MAC type checks explicit.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
> drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
> 2 files changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> index 5dbd659941..7dc02a472b 100644
> --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> @@ -137,18 +137,6 @@
> #define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
> #define IXGBE_MAX_L2_TN_FILTER_NUM 128
>
> -#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
> - return -ENOTSUP;\
> -} while (0)
> -
> -#define MAC_TYPE_FILTER_SUP(type) do {\
> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
> - (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
> - (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
> - return -ENOTSUP;\
> -} while (0)
> -
Ack for removing the former. For the latter, since the list is longer and
the code is used twice, I'd be tempted to convert it to an inline function
taking a struct hw pointer and returning bool. WDYT?
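A minimal sketch of that suggestion. The helper name is an assumption, and the enum here is a cut-down stand-in for the one in the ixgbe base code, holding only the values named in the removed `MAC_TYPE_FILTER_SUP` macro plus one unsupported type:

```c
#include <stdbool.h>

/* stand-in for the base-code MAC type enum */
enum ixgbe_mac_type {
	ixgbe_mac_82598EB,
	ixgbe_mac_82599EB,
	ixgbe_mac_X540,
	ixgbe_mac_X550,
	ixgbe_mac_X550EM_x,
	ixgbe_mac_X550EM_a,
	ixgbe_mac_E610,
};

/* Hypothetical bool-returning replacement for MAC_TYPE_FILTER_SUP:
 * the two call sites test it explicitly instead of hiding a return
 * statement inside a macro. */
static inline bool
ixgbe_mac_supports_filter(enum ixgbe_mac_type type)
{
	switch (type) {
	case ixgbe_mac_82599EB:
	case ixgbe_mac_X540:
	case ixgbe_mac_X550:
	case ixgbe_mac_X550EM_x:
	case ixgbe_mac_X550EM_a:
	case ixgbe_mac_E610:
		return true;
	default:
		return false;
	}
}
```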
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-11 13:52 ` [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-16 17:06 ` Bruce Richardson
2026-02-17 12:32 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:06 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:52PM +0000, Anatoly Burakov wrote:
> Currently, when updating or querying RSS redirection table (RETA), we
> are using rte_zmalloc followed by an immediate rte_free. This is not
> needed as this memory is not being stored anywhere, so replace it with
> regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
> index 06430e6319..654b0e5d16 100644
> --- a/drivers/net/intel/i40e/i40e_ethdev.c
> +++ b/drivers/net/intel/i40e/i40e_ethdev.c
> @@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> return -EINVAL;
> }
>
> - lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
> + lut = calloc(1, reta_size);
> if (!lut) {
> PMD_DRV_LOG(ERR, "No memory can be allocated");
> return -ENOMEM;
> @@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> pf->adapter->rss_reta_updated = 1;
>
> out:
> - rte_free(lut);
> + free(lut);
>
> return ret;
> }
For i40e, do we not have a reasonable max RETA size that we could use for a
local array variable, saving the allocation and free entirely?
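A sketch of what that could look like, assuming `RTE_ETH_RSS_RETA_SIZE_512` (512 entries, as `i40e_pf_reset_rss_reta` already uses for its stack buffer) is a valid upper bound; the function name and -1 error value are illustrative:

```c
#include <stdint.h>
#include <string.h>

#define RTE_ETH_RSS_RETA_SIZE_512 512 /* value from rte_ethdev.h */

static int
rss_reta_update_sketch(const uint8_t *entries, uint16_t reta_size)
{
	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];

	/* with a fixed-size local buffer, there is no allocation to fail */
	if (reta_size > sizeof(lut))
		return -1; /* the driver would return -EINVAL here */

	memcpy(lut, entries, reta_size);
	/* ... program lut into hardware here ... */
	return 0;
}
```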
> @@ -4673,7 +4673,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
> return -EINVAL;
> }
>
> - lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
> + lut = calloc(1, reta_size);
> if (!lut) {
> PMD_DRV_LOG(ERR, "No memory can be allocated");
> return -ENOMEM;
> @@ -4690,7 +4690,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
> }
>
> out:
> - rte_free(lut);
> + free(lut);
>
> return ret;
> }
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-11 13:52 ` [PATCH v3 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-16 17:08 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:08 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:53PM +0000, Anatoly Burakov wrote:
> Currently, when adding, removing, or configuring MAC and VLAN filters,
> we are using rte_zmalloc followed by an immediate rte_free. This is not
> needed as this memory is not being stored anywhere, so replace it with
> regular malloc/free.
A minor nit on a number of these patches and the commit logs. I suggest
expanding on the "not being stored anywhere" part to instead say that the
memory is a temporary scratchpad and so does not need to be in DPDK
hugepage memory.
For the code though:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
> drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
> 2 files changed, 26 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
> index 654b0e5d16..806c29368c 100644
> --- a/drivers/net/intel/i40e/i40e_ethdev.c
> +++ b/drivers/net/intel/i40e/i40e_ethdev.c
> @@ -4165,8 +4165,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
> if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
> i = 0;
> num = vsi->mac_num;
> - mac_filter = rte_zmalloc("mac_filter_info_data",
> - num * sizeof(*mac_filter), 0);
> + mac_filter = calloc(num, sizeof(*mac_filter));
> if (mac_filter == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -4206,7 +4205,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
> if (ret)
> PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
> }
> - rte_free(mac_filter);
> + free(mac_filter);
> }
>
> if (mask & RTE_ETH_QINQ_STRIP_MASK) {
> @@ -6231,8 +6230,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
>
> num = vsi->mac_num;
>
> - mac_filter = rte_zmalloc("mac_filter_info_data",
> - num * sizeof(*mac_filter), 0);
> + mac_filter = calloc(num, sizeof(*mac_filter));
> if (mac_filter == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -6264,7 +6262,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
> }
>
> DONE:
> - rte_free(mac_filter);
> + free(mac_filter);
> return ret;
> }
>
> @@ -7154,7 +7152,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
> ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
> ele_buff_size = hw->aq.asq_buf_size;
>
> - req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
> + req_list = calloc(1, ele_buff_size);
> if (req_list == NULL) {
> PMD_DRV_LOG(ERR, "Fail to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -7207,7 +7205,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
> } while (num < total);
>
> DONE:
> - rte_free(req_list);
> + free(req_list);
> return ret;
> }
>
> @@ -7230,7 +7228,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
> ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
> ele_buff_size = hw->aq.asq_buf_size;
>
> - req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
> + req_list = calloc(1, ele_buff_size);
> if (req_list == NULL) {
> PMD_DRV_LOG(ERR, "Fail to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -7286,7 +7284,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
> } while (num < total);
>
> DONE:
> - rte_free(req_list);
> + free(req_list);
> return ret;
> }
>
> @@ -7455,7 +7453,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
> else
> num = vsi->mac_num * vsi->vlan_num;
>
> - mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
> + mv_f = calloc(num, sizeof(*mv_f));
> if (mv_f == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -7484,7 +7482,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
>
> ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
> DONE:
> - rte_free(mv_f);
> + free(mv_f);
>
> return ret;
> }
> @@ -7510,7 +7508,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
> return I40E_ERR_PARAM;
> }
>
> - mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
> + mv_f = calloc(mac_num, sizeof(*mv_f));
>
> if (mv_f == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> @@ -7532,7 +7530,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
> vsi->vlan_num++;
> ret = I40E_SUCCESS;
> DONE:
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
>
> @@ -7561,7 +7559,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
> return I40E_ERR_PARAM;
> }
>
> - mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
> + mv_f = calloc(mac_num, sizeof(*mv_f));
>
> if (mv_f == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> @@ -7594,7 +7592,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
> vsi->vlan_num--;
> ret = I40E_SUCCESS;
> DONE:
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
>
> @@ -7626,7 +7624,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
> mac_filter->filter_type == I40E_MAC_HASH_MATCH)
> vlan_num = 1;
>
> - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
> + mv_f = calloc(vlan_num, sizeof(*mv_f));
> if (mv_f == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -7665,7 +7663,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
>
> ret = I40E_SUCCESS;
> DONE:
> - rte_free(mv_f);
> + free(mv_f);
>
> return ret;
> }
> @@ -7696,7 +7694,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
> filter_type == I40E_MAC_HASH_MATCH)
> vlan_num = 1;
>
> - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
> + mv_f = calloc(vlan_num, sizeof(*mv_f));
> if (mv_f == NULL) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -7725,7 +7723,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
>
> ret = I40E_SUCCESS;
> DONE:
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
>
> diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
> index a358f68bc5..fb73fa924f 100644
> --- a/drivers/net/intel/i40e/rte_pmd_i40e.c
> +++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
> @@ -233,7 +233,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
> filter_type == I40E_MAC_HASH_MATCH)
> vlan_num = 1;
>
> - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
> + mv_f = calloc(vlan_num, sizeof(*mv_f));
> if (!mv_f) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -250,18 +250,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
> ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
> &f->mac_info.mac_addr);
> if (ret != I40E_SUCCESS) {
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
> }
>
> ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
> if (ret != I40E_SUCCESS) {
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
>
> - rte_free(mv_f);
> + free(mv_f);
> ret = I40E_SUCCESS;
> }
>
> @@ -294,7 +294,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
> f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
> vlan_num = 1;
>
> - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
> + mv_f = calloc(vlan_num, sizeof(*mv_f));
> if (!mv_f) {
> PMD_DRV_LOG(ERR, "failed to allocate memory");
> return I40E_ERR_NO_MEMORY;
> @@ -312,18 +312,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
> ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
> &f->mac_info.mac_addr);
> if (ret != I40E_SUCCESS) {
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
> }
>
> ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
> if (ret != I40E_SUCCESS) {
> - rte_free(mv_f);
> + free(mv_f);
> return ret;
> }
>
> - rte_free(mv_f);
> + free(mv_f);
> ret = I40E_SUCCESS;
> }
>
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-11 13:52 ` [PATCH v3 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-16 17:09 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:09 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:54PM +0000, Anatoly Burakov wrote:
> Currently, when responding to VF resource queries, we are dynamically
> allocating a temporary buffer with rte_zmalloc followed by an immediate
> rte_free. This is not needed as the response is only used temporarily,
> so replace it with a stack-allocated structure (the allocation is fixed
> in size and pretty small).
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-11 13:52 ` [PATCH v3 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-16 17:11 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:11 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:55PM +0000, Anatoly Burakov wrote:
> Currently, when processing admin queue messages, we are using rte_zmalloc
> followed by an immediate rte_free. This is not needed as the message
> buffer is only used temporarily within the function scope, so replace it
> with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-11 13:52 ` [PATCH v3 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-16 17:12 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:12 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:56PM +0000, Anatoly Burakov wrote:
> Currently, when processing Dynamic Driver Profile (DDP) packages and
> checking profile information, we are using rte_zmalloc followed by
> immediate rte_free. This is not needed as these buffers are only used
> temporarily, so replace it with stack-allocated structures or regular
> malloc/free where appropriate.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v3 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-11 13:52 ` [PATCH v3 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-16 17:13 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:13 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Wed, Feb 11, 2026 at 01:52:57PM +0000, Anatoly Burakov wrote:
> Currently, when updating customized protocol and packet type information
> via DDP packages, we are using rte_zmalloc followed by immediate rte_free.
> This is not needed as these buffers are only used temporarily within
> the function scope, so replace it with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++++++++++-------------
> 1 file changed, 12 insertions(+), 13 deletions(-)
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-13 10:26 ` [PATCH v4 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-16 17:14 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:14 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:22AM +0000, Anatoly Burakov wrote:
> Currently, when adding, removing, or configuring MAC and VLAN filters,
> we are using rte_zmalloc followed by an immediate rte_free. This is not
> needed as this memory is not being stored anywhere, so replace it with
> regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
> drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
> 2 files changed, 26 insertions(+), 28 deletions(-)
>
As done on v3:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-13 10:26 ` [PATCH v4 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-16 17:14 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:14 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:23AM +0000, Anatoly Burakov wrote:
> Currently, when responding to VF resource queries, we are dynamically
> allocating a temporary buffer with rte_zmalloc followed by an immediate
> rte_free. This is not needed as the response is only used temporarily,
> so replace it with a stack-allocated structure (the allocation is fixed
> in size and pretty small).
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
As done on v3:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-13 10:26 ` [PATCH v4 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-16 17:15 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:15 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:24AM +0000, Anatoly Burakov wrote:
> Currently, when processing admin queue messages, we are using rte_zmalloc
> followed by an immediate rte_free. This is not needed as the message
> buffer is only used temporarily within the function scope, so replace it
> with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
As done on v3:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-13 10:26 ` [PATCH v4 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-16 17:15 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:15 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:25AM +0000, Anatoly Burakov wrote:
> Currently, when processing Dynamic Driver Profile (DDP) packages and
> checking profile information, we are using rte_zmalloc followed by
> immediate rte_free. This is not needed as these buffers are only used
> temporarily, so replace it with stack-allocated structures or regular
> malloc/free where appropriate.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
As done on v3:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-13 10:26 ` [PATCH v4 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-16 17:15 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:15 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:26AM +0000, Anatoly Burakov wrote:
> Currently, when updating customized protocol and packet type information
> via DDP packages, we are using rte_zmalloc followed by immediate rte_free.
> This is not needed as these buffers are only used temporarily within
> the function scope, so replace it with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
As done on v3:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode
2026-02-13 10:26 ` [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-16 17:16 ` Bruce Richardson
2026-02-17 12:51 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:16 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:27AM +0000, Anatoly Burakov wrote:
> When pipeline mode was removed, some of the things used by pipelines were
> left in the code. Remove them as they are unused.
>
This needs a fixes tag to point to the removal.
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_fdir.c | 1 -
> drivers/net/intel/iavf/iavf_fsub.c | 1 -
> drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
> drivers/net/intel/iavf/iavf_hash.c | 1 -
> drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
> 5 files changed, 19 deletions(-)
>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-13 10:26 ` [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-16 17:19 ` Bruce Richardson
2026-02-16 22:25 ` Stephen Hemminger
1 sibling, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:19 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:28AM +0000, Anatoly Burakov wrote:
> Currently, when calling down into the VF mailbox, IPsec code will use
> dynamic memory allocation (rte_malloc one at that!) to allocate VF message
> structures which are ~40 bytes in size, and then immediately frees them.
> This is wasteful and unnecessary, so use stack allocation instead.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
> 1 file changed, 51 insertions(+), 106 deletions(-)
>
> diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
> index 66eaea8715..cb437d3212 100644
> --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
> +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
> @@ -458,36 +458,24 @@ static uint32_t
> iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
> struct rte_security_session_conf *conf)
> {
> - struct inline_ipsec_msg *request = NULL, *response = NULL;
> - struct virtchnl_ipsec_sa_cfg *sa_cfg;
> - size_t request_len, response_len;
> + struct {
> + struct inline_ipsec_msg msg;
> + struct virtchnl_ipsec_sa_cfg sa_cfg;
> + } sa_req;
> + struct {
> + struct inline_ipsec_msg msg;
> + struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
> + } sa_resp;
> + struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
Nit: Split these across two lines, since the assignments are a little long.
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit
2026-02-13 10:26 ` [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-16 17:23 ` Bruce Richardson
2026-02-18 10:32 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:23 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:29AM +0000, Anatoly Burakov wrote:
> Currently, parser deinitialization will trigger removal of current RSS
> configuration. This should not be done as part of parser deinitialization,
> but should rather be a separate step in dev close flow.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
> drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
> drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
> 3 files changed, 26 insertions(+), 4 deletions(-)
> create mode 100644 drivers/net/intel/iavf/iavf_hash.h
>
> diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
> index 802e095174..70eb7e7ec5 100644
> --- a/drivers/net/intel/iavf/iavf_ethdev.c
> +++ b/drivers/net/intel/iavf/iavf_ethdev.c
> @@ -35,6 +35,7 @@
> #include "iavf_generic_flow.h"
> #include "rte_pmd_iavf.h"
> #include "iavf_ipsec_crypto.h"
> +#include "iavf_hash.h"
>
> /* devargs */
> #define IAVF_PROTO_XTR_ARG "proto_xtr"
> @@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
> /* free iAVF security device context all related resources */
> iavf_security_ctx_destroy(adapter);
>
> + /* remove RSS configuration */
> + iavf_hash_uninit(adapter);
> +
> iavf_flow_flush(dev, NULL);
> iavf_flow_uninit(adapter);
>
> diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
> index a40fed7542..d864998402 100644
> --- a/drivers/net/intel/iavf/iavf_hash.c
> +++ b/drivers/net/intel/iavf/iavf_hash.c
> @@ -22,6 +22,7 @@
> #include "iavf_log.h"
> #include "iavf.h"
> #include "iavf_generic_flow.h"
> +#include "iavf_hash.h"
>
> #define IAVF_PHINT_NONE 0
> #define IAVF_PHINT_GTPU BIT_ULL(0)
> @@ -77,7 +78,7 @@ static int
> iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
> struct rte_flow_error *error);
> static void
> -iavf_hash_uninit(struct iavf_adapter *ad);
> +iavf_hash_uninit_parser(struct iavf_adapter *ad);
> static void
> iavf_hash_free(struct rte_flow *flow);
> static int
> @@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
> .init = iavf_hash_init,
> .create = iavf_hash_create,
> .destroy = iavf_hash_destroy,
> - .uninit = iavf_hash_uninit,
> + .uninit = iavf_hash_uninit_parser,
> .free = iavf_hash_free,
> .type = IAVF_FLOW_ENGINE_HASH,
> };
> @@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
> }
>
> static void
> +iavf_hash_uninit_parser(struct iavf_adapter *ad)
> +{
> + iavf_unregister_parser(&iavf_hash_parser, ad);
> +}
> +
> +void
> iavf_hash_uninit(struct iavf_adapter *ad)
> {
> struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
> @@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
> rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
> if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
> PMD_DRV_LOG(ERR, "fail to delete default RSS");
> -
> - iavf_unregister_parser(&iavf_hash_parser, ad);
> }
>
> static void
> diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
> new file mode 100644
> index 0000000000..2348f32673
> --- /dev/null
> +++ b/drivers/net/intel/iavf/iavf_hash.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2025 Intel Corporation
> + */
> +
> +#ifndef _IAVF_HASH_H_
> +#define _IAVF_HASH_H_
> +
> +#include "iavf.h"
> +
> +void
> +iavf_hash_uninit(struct iavf_adapter *ad);
> +
> +#endif /* _IAVF_HASH_H_ */
> --
While it's primarily a matter of taste, do we really need to create a new
header for this? For a single function prototype, could it not just go
directly in iavf.h itself?
Either way:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-13 10:26 ` [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-16 17:24 ` Bruce Richardson
2026-02-18 10:45 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:24 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:30AM +0000, Anatoly Burakov wrote:
> Currently, when configuring RSS (redirection table, lookup table, and
> hash key), we are using rte_zmalloc followed by an immediate rte_free.
> This is not needed as this memory is not being stored anywhere, so
> replace it with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
> drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
> index 70eb7e7ec5..d3fa47fd5e 100644
> --- a/drivers/net/intel/iavf/iavf_ethdev.c
> +++ b/drivers/net/intel/iavf/iavf_ethdev.c
> @@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
> return -EINVAL;
> }
>
> - lut = rte_zmalloc("rss_lut", reta_size, 0);
> + lut = calloc(1, reta_size);
As with i40e, can we make this (and the key allocation below) static based
on max sizes supported?
> if (!lut) {
> PMD_DRV_LOG(ERR, "No memory can be allocated");
> return -ENOMEM;
> @@ -1574,7 +1574,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
> ret = iavf_configure_rss_lut(adapter);
> if (ret) /* revert back */
> rte_memcpy(vf->rss_lut, lut, reta_size);
> - rte_free(lut);
> + free(lut);
>
> return ret;
> }
> diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
> index 9ad39300c6..55986ef909 100644
> --- a/drivers/net/intel/iavf/iavf_vchnl.c
> +++ b/drivers/net/intel/iavf/iavf_vchnl.c
> @@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
> int len, err = 0;
>
> len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
> - rss_lut = rte_zmalloc("rss_lut", len, 0);
> + rss_lut = calloc(1, len);
> if (!rss_lut)
> return -ENOMEM;
>
> @@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
> PMD_DRV_LOG(ERR,
> "Failed to execute command of OP_CONFIG_RSS_LUT");
>
> - rte_free(rss_lut);
> + free(rss_lut);
> return err;
> }
>
> @@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
> int len, err = 0;
>
> len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
> - rss_key = rte_zmalloc("rss_key", len, 0);
> + rss_key = calloc(1, len);
> if (!rss_key)
> return -ENOMEM;
>
> @@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
> PMD_DRV_LOG(ERR,
> "Failed to execute command of OP_CONFIG_RSS_KEY");
>
> - rte_free(rss_key);
> + free(rss_key);
> return err;
> }
>
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-13 10:26 ` [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-16 17:27 ` Bruce Richardson
2026-02-19 9:22 ` Burakov, Anatoly
2026-02-19 13:21 ` Burakov, Anatoly
0 siblings, 2 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:27 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
> Currently, when adding or deleting MAC addresses, we are using
> rte_zmalloc followed by an immediate rte_free. This is not needed as this
> memory is not being stored anywhere, so replace it with regular
> malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
> index 55986ef909..19dce17612 100644
> --- a/drivers/net/intel/iavf/iavf_vchnl.c
> +++ b/drivers/net/intel/iavf/iavf_vchnl.c
> @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
> }
> }
>
> - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
> + list = calloc(1, len);
Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
buffer of that fixed size might be better?
Also, that check itself seems a little off, since it allows buffers greater
than the size, rather than ignoring the length of the address that pushes
it over the limit.
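The stricter boundary check described above could be sketched roughly like
this (AQ_BUF_SZ and ADDR_LEN stand in for the driver's real constants; the
helper is illustrative, not the driver's code):

```c
#include <stddef.h>

#define AQ_BUF_SZ 4096	/* stands in for IAVF_AQ_BUF_SZ */
#define ADDR_LEN  6	/* size of one MAC address entry */

/* cap the batch *before* the entry that would overflow the buffer,
 * rather than checking only after it has already been counted */
static size_t
entries_that_fit(size_t hdr_len)
{
	if (hdr_len >= AQ_BUF_SZ)
		return 0;
	return (AQ_BUF_SZ - hdr_len) / ADDR_LEN;
}
```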
> if (!list) {
> PMD_DRV_LOG(ERR, "fail to allocate memory");
> return;
> @@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
> PMD_DRV_LOG(ERR, "fail to execute command %s",
> add ? "OP_ADD_ETHER_ADDRESS" :
> "OP_DEL_ETHER_ADDRESS");
> - rte_free(list);
> + free(list);
> begin = next_begin;
> } while (begin < IAVF_NUM_MACADDR_MAX);
> }
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 21/27] net/iavf: avoid rte malloc in IPsec operations
2026-02-13 10:26 ` [PATCH v4 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-16 17:30 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:30 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:32AM +0000, Anatoly Burakov wrote:
> Currently, when performing IPsec security association operations and
> retrieving device capabilities, we are using rte_malloc followed by
> immediate rte_free. This is not needed as these structures are only
> used temporarily, so replace it with stack-allocated structures for
> small fixed-size messages and regular malloc/free for variable-sized
> buffers.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_ipsec_crypto.c | 58 ++++++++--------------
> 1 file changed, 21 insertions(+), 37 deletions(-)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-13 10:26 ` [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-16 17:31 ` Bruce Richardson
2026-02-16 22:24 ` Stephen Hemminger
1 sibling, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:31 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:33AM +0000, Anatoly Burakov wrote:
> Currently, when enabling, disabling, or switching queues, we are using
> rte_malloc followed by an immediate rte_free. This is not needed as these
> structures are not being stored anywhere, so replace them with stack
> allocation or malloc/free where appropriate.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_vchnl.c | 65 ++++++++++++-----------------
> 1 file changed, 26 insertions(+), 39 deletions(-)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-13 10:26 ` [PATCH v4 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-16 17:31 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:31 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, Feb 13, 2026 at 10:26:34AM +0000, Anatoly Burakov wrote:
> Currently, when configuring IRQ maps, we are using rte_zmalloc followed
> by an immediate rte_free. This is not needed as this memory is not being
> stored anywhere, so replace it with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-13 10:26 ` [PATCH v4 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-16 17:32 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:32 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:35AM +0000, Anatoly Burakov wrote:
> Currently, when updating or querying RSS redirection table (RETA), we
> are using rte_zmalloc followed by an immediate rte_free. This is not
> needed as this memory is not being stored anywhere, so replace it with
> regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
> drivers/net/intel/ice/ice_ethdev.c | 8 ++++----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
Same comment as with the other reta changes. With or without that taken
into account:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-13 10:26 ` [PATCH v4 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-16 17:33 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:33 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:36AM +0000, Anatoly Burakov wrote:
> Currently, when adding or deleting MAC addresses, we are using
> rte_zmalloc followed by an immediate rte_free. This is not needed as this
> memory is not being stored anywhere, so replace it with regular
> calloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-13 10:26 ` [PATCH v4 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-16 17:34 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:34 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:37AM +0000, Anatoly Burakov wrote:
> Currently, when parsing raw flow patterns, we are using rte_zmalloc
> followed by an immediate rte_free. This is not needed as this memory is
> not being stored anywhere, so replace it with regular malloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
> drivers/net/intel/ice/ice_hash.c | 10 +++++-----
> 2 files changed, 12 insertions(+), 12 deletions(-)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-13 10:26 ` [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
@ 2026-02-16 17:37 ` Bruce Richardson
2026-02-19 13:07 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-16 17:37 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Fri, Feb 13, 2026 at 10:26:38AM +0000, Anatoly Burakov wrote:
> Currently, when allocating buffers for pattern match items and flow item
> storage, we are using rte_zmalloc followed by immediate rte_free. This is
> not needed as these buffers are only used temporarily within the function
> scope, so replace it with regular calloc/free.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
> drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
> drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
> drivers/net/intel/ice/ice_hash.c | 3 ++-
> drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
> 5 files changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
> index 38e30a4f62..6754a40044 100644
> --- a/drivers/net/intel/ice/ice_acl_filter.c
> +++ b/drivers/net/intel/ice/ice_acl_filter.c
> @@ -9,6 +9,7 @@
> #include <string.h>
> #include <unistd.h>
> #include <stdarg.h>
> +#include <stdlib.h>
> #include <rte_debug.h>
> #include <rte_ether.h>
> #include <ethdev_driver.h>
> @@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
> *meta = filter;
>
> error:
> - rte_free(item);
> + free(item);
> return ret;
> }
Should this code be reworked so that the error is propagated back to the
caller and the item freed there, so that allocation and freeing occur
together in the one function - or even in the same file?
>
> diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
> index 5f44b5c818..8cca831fa9 100644
> --- a/drivers/net/intel/ice/ice_fdir_filter.c
> +++ b/drivers/net/intel/ice/ice_fdir_filter.c
> @@ -3,6 +3,7 @@
> */
>
> #include <stdio.h>
> +#include <stdlib.h>
> #include <rte_flow.h>
> #include <rte_hash.h>
> #include <rte_hash_crc.h>
> @@ -2504,11 +2505,11 @@ ice_fdir_parse(struct ice_adapter *ad,
> rte_free(filter->pkt_buf);
> }
>
> - rte_free(item);
> + free(item);
> return ret;
> error:
> rte_free(filter->pkt_buf);
> - rte_free(item);
> + free(item);
> return ret;
> }
>
> diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
> index 4049157eab..3f7a9f4714 100644
> --- a/drivers/net/intel/ice/ice_generic_flow.c
> +++ b/drivers/net/intel/ice/ice_generic_flow.c
> @@ -9,6 +9,7 @@
> #include <string.h>
> #include <unistd.h>
> #include <stdarg.h>
> +#include <stdlib.h>
>
> #include <rte_ether.h>
> #include <ethdev_driver.h>
> @@ -2136,19 +2137,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
> }
> item_num++;
>
> - items = rte_zmalloc("ice_pattern",
> - item_num * sizeof(struct rte_flow_item), 0);
> + items = calloc(item_num, sizeof(struct rte_flow_item));
> if (!items) {
<snip for brevity>
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-13 10:26 ` [PATCH v4 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
2026-02-16 17:31 ` Bruce Richardson
@ 2026-02-16 22:24 ` Stephen Hemminger
1 sibling, 0 replies; 297+ messages in thread
From: Stephen Hemminger @ 2026-02-16 22:24 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, 13 Feb 2026 10:26:33 +0000
Anatoly Burakov <anatoly.burakov@intel.com> wrote:
> Currently, when enabling, disabling, or switching queues, we are using
> rte_malloc followed by an immediate rte_free. This is not needed as these
> structures are not being stored anywhere, so replace them with stack
> allocation or malloc/free where appropriate.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
AI caught something here.
Patch 22/27 — net/iavf: avoid rte malloc in queue operations
Error (Correctness — Uninitialized stack memory)
In iavf_switch_queue_lv(), the stack-allocated queue_req is not zero-initialized:

    } queue_req; /* <-- no "= {0}" */
The original code used rte_zmalloc which zeroes the buffer. The other two functions in this same patch (iavf_enable_queues_lv and iavf_disable_queues_lv) correctly use = {0}, but iavf_switch_queue_lv does not. The structure is then sent as a virtchnl message via args.in_args, meaning uninitialized stack bytes will be transmitted to the PF driver. While individual fields like num_chunks, vport_id, queue_type, start_queue_id, and num_queues are set explicitly, padding bytes and any other fields within the struct will contain garbage.
Additionally, this function previously allocated only sizeof(struct virtchnl_del_ena_dis_queues) (which includes exactly 1 chunk), but the new stack struct includes IAVF_RXTX_QUEUE_CHUNKS_NUM - 1 additional chunks. Since num_chunks is set to 1, the extra space is harmless — but passing sizeof(queue_req) as in_args_size now sends a larger buffer than before. Compare with the enable/disable functions which also have the extra chunks but actually use IAVF_RXTX_QUEUE_CHUNKS_NUM chunks. The size change is likely benign but worth noting.
Fix: add = {0} to the declaration, matching the pattern in the sibling functions in the same patch:

    } queue_req = {0};
^ permalink raw reply [flat|nested] 297+ messages in thread
* Re: [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-13 10:26 ` [PATCH v4 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
2026-02-16 17:19 ` Bruce Richardson
@ 2026-02-16 22:25 ` Stephen Hemminger
1 sibling, 0 replies; 297+ messages in thread
From: Stephen Hemminger @ 2026-02-16 22:25 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin
On Fri, 13 Feb 2026 10:26:28 +0000
Anatoly Burakov <anatoly.burakov@intel.com> wrote:
> Currently, when calling down into the VF mailbox, IPsec code will use
> dynamic memory allocation (rte_malloc one at that!) to allocate VF message
> structures which are ~40 bytes in size, and then immediately frees them.
> This is wasteful and unnecessary, so use stack allocation instead.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
AI spotted this:
Patch 17/27 — net/iavf: avoid rte malloc in VF mailbox for IPsec
Warning (Correctness — Uninitialized stack memory)
The stack-allocated anonymous structs (sa_req, sa_resp, sp_req, sp_resp) are not zero-initialized. The originals used rte_malloc (not rte_zmalloc), so this isn't a regression per se — the old code also didn't zero the buffers. However, since these are sent as virtchnl messages, it would be safer to add = {0} for defense in depth, particularly since individual field assignment may miss padding.
This is lower priority than patch 22 since the old code had the same issue.
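To illustrate the padding concern in isolation (with a made-up struct, not
the actual virtchnl layout):

```c
#include <stdint.h>
#include <string.h>

/* illustrative message layout -- NOT the real virtchnl structs; the
 * point is the implicit padding after 'type' on typical ABIs */
struct wire_msg {
	uint8_t  type;      /* 3 padding bytes typically follow */
	uint32_t vport_id;
	uint16_t num_queues;
};

static void
build_msg(struct wire_msg *m)
{
	/* zero the whole object first, padding bytes included, so no
	 * stack garbage leaks into a message that goes on the wire */
	memset(m, 0, sizeof(*m));
	m->type = 1;
	m->vport_id = 42;
	m->num_queues = 4;
}
```

Field-by-field assignment alone leaves the padding bytes holding whatever
was on the stack, which is exactly what the memset (or = {0}) avoids.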
^ permalink raw reply [flat|nested] 297+ messages in thread
* [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-13 10:26 ` [PATCH v4 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (27 more replies)
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (3 subsequent siblings)
19 siblings, 28 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on driver bug fix patchset [1] (already integrated into next-net-intel).
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
v4 -> v5:
- Adjusted typing for queue size
- Fixed missing zero initializations for stack allocations
Anatoly Burakov (27):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 232 +++++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 26 ++-
drivers/net/intel/i40e/i40e_flow.c | 146 ++++++------
drivers/net/intel/i40e/i40e_hash.c | 27 ++-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +--
drivers/net/intel/i40e/rte_pmd_i40e.c | 59 ++---
drivers/net/intel/iavf/iavf_ethdev.c | 8 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 --
drivers/net/intel/iavf/iavf_hash.c | 14 +-
drivers/net/intel/iavf/iavf_hash.h | 13 ++
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 216 ++++++-----------
drivers/net/intel/iavf/iavf_vchnl.c | 84 +++----
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 8 +-
drivers/net/intel/ice/ice_ethdev.c | 8 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 220 +++++++++++-------
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 ---
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 541 insertions(+), 679 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
--
2.47.3
^ permalink raw reply [flat|nested] 297+ messages in thread
* [PATCH v5 01/27] net/ixgbe: remove MAC type check macros
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (26 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 02/27] net/ixgbe: remove security-related ifdefery
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (25 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 03/27] net/ixgbe: split security and ntuple filters
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-17 12:13 ` [PATCH v5 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 16:59 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
` (24 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code.
Separate the security filter from the ntuple filter and parse it separately.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 194 ++++++++++++++++-----------
1 file changed, 114 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..78f40b5c37 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,104 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action,
+ * which is const. however, we do need to act on the session, so
+ * either we do some kind of pointer based lookup to get session
+ * pointer internally (which quickly gets unwieldy for lots of
+ * flows case), or we simply cast away constness.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +691,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3125,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3361,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v5 04/27] net/i40e: get rid of global filter variables
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 16:59 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 05/27] net/i40e: make default RSS key global Anatoly Burakov
` (23 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact that
`rte_flow_validate()` is called directly from `rte_flow_create()`, with no way
to pass state between the two functions. Fix that by passing a filter context
down the call chain, with a small wrapper around validation that creates a
dummy context.
Additionally, the tunnel filter doesn't appear to be used by anything, so it
is omitted from the new structure.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
* [PATCH v5 05/27] net/i40e: make default RSS key global
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 17:06 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (22 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but each
of those places defines it as a local variable. Make it a global constant, and
adjust all callers to use it. When dealing with adminq, we cannot send down
the constant directly because adminq commands do not guarantee const-ness, so
copy the RSS key into a local buffer before sending it down to hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..2deb87b01b 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9082,23 +9082,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v5 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 17:09 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 07/27] net/i40e: use proper flex len define Anatoly Burakov
` (21 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum number of
queues per traffic class (64), we do not use unsigned values, which
results in a compiler warning when comparing `I40E_MAX_Q_PER_TC` against
an unsigned value. Make it an unsigned 16-bit constant, and adjust
callers to use matching types.
As a consequence, `i40e_align_floor` now returns an unsigned value as
well. This is correct, because nothing about that function implies that
signed usage is a valid use case.
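To see why the unsigned return type is harmless, consider what a power-of-two floor like `i40e_align_floor` computes. The sketch below is a hypothetical reimplementation (named `align_floor` to avoid implying it is the driver's code); a negative input never makes sense for queue counts, and returning `uint32_t` lets callers compare the result against unsigned queue numbers without casts.

```c
#include <stdint.h>

/* Largest power of two that is <= n; 0 maps to 0. */
static inline uint32_t align_floor(uint32_t n)
{
	uint32_t p = 1;

	if (n == 0)
		return 0;
	/* Double p until the next doubling would exceed n. */
	while (p <= n / 2)
		p <<= 1;
	return p;
}
```

With a signed return type, an expression like `rss_act->queue[i] >= align_floor(n)` (where `queue[i]` is `uint16_t`) would trigger a signed/unsigned comparison warning; making the whole chain unsigned removes the casts the old code needed.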
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2deb87b01b..608a6cff4d 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8978,11 +8978,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
/* Calculate the maximum number of contiguous PF queues that are configured */
-int
+uint16_t
i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
{
struct rte_eth_dev_data *data = pf->dev_data;
- int i, num;
+ int i;
+ uint16_t num;
struct ci_rx_queue *rxq;
num = 0;
@@ -9058,7 +9059,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ uint16_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
@@ -9074,7 +9075,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
return 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++)
- lut[i] = (uint8_t)(i % (uint32_t)num);
+ lut[i] = (uint8_t)(i % num);
return i40e_set_rss_lut(pf->main_vsi, lut, (uint16_t)i);
}
@@ -10771,7 +10772,7 @@ i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
return I40E_ERR_INVALID_QP_ID;
}
- qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+ qpnum_per_tc = RTE_MIN((uint16_t)i40e_align_floor(qpnum_per_tc),
I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..ca6638b32c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC UINT16_C(64)
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1456,7 +1456,7 @@ int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
void i40e_flex_payload_reg_set_default(struct i40e_hw *hw);
void i40e_pf_disable_rss(struct i40e_pf *pf);
-int i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
+uint16_t i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
int i40e_pf_reset_rss_reta(struct i40e_pf *pf);
int i40e_pf_reset_rss_key(struct i40e_pf *pf);
int i40e_pf_config_rss(struct i40e_pf *pf);
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..5756ebf255 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ uint16_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v5 07/27] net/i40e: use proper flex len define
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 17:10 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 08/27] net/i40e: remove global pattern variable Anatoly Burakov
` (20 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and this only works because both happen to evaluate to the same
value.
Use the i40e-specific definition for both instead.
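The hazard the commit fixes is generic: two defines that coincidentally agree can silently diverge later. A defensive sketch, with hypothetical names (`GENERIC_MAX_FLEXLEN`, `DRIVER_MAX_FLEX_LEN`, `flow_ext`) standing in for the generic and driver-specific lengths:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the two length defines. */
#define GENERIC_MAX_FLEXLEN 16
#define DRIVER_MAX_FLEX_LEN 16

/* When spec and mask must be the same size, use one define for both... */
struct flow_ext {
	uint8_t flexbytes[DRIVER_MAX_FLEX_LEN];
	uint8_t flex_mask[DRIVER_MAX_FLEX_LEN];
};

/* ...or at least assert the coincidence at compile time instead of
 * relying on it, so a future change to either define fails loudly. */
_Static_assert(GENERIC_MAX_FLEXLEN == DRIVER_MAX_FLEX_LEN,
	       "flex length defines diverged");
```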
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index ca6638b32c..d57c53f661 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v5 08/27] net/i40e: remove global pattern variable
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 17:15 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (19 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list by
removing void flow items and copies the remaining items into an array. For
patterns with up to 32 flow items, that array is a global variable; bigger
patterns get a new list dynamically allocated with rte_zmalloc, which is
overkill for this use case.
Remove the global variable, and replace the split behavior with an
unconditional heap allocation using regular calloc/free.
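The skip-void copy itself is straightforward once the global buffer is gone. A self-contained sketch under simplified assumptions (the `flow_item`/`item_type` types and `pattern_skip_void` below are illustrative stand-ins, not the `rte_flow_item` API):

```c
#include <stdlib.h>

/* Minimal stand-ins for flow item types; VOID items get filtered out. */
enum item_type { ITEM_VOID, ITEM_ETH, ITEM_IPV4, ITEM_END };

struct flow_item { enum item_type type; };

/* Count non-void items (plus the END terminator), allocate exactly that
 * many entries on the heap, and copy the pattern over -- no global buffer,
 * no threshold between "small" and "large" patterns. */
static struct flow_item *pattern_skip_void(const struct flow_item *pattern,
					   size_t *out_num)
{
	size_t num = 0, i, j;
	struct flow_item *items;

	for (i = 0; pattern[i].type != ITEM_END; i++)
		if (pattern[i].type != ITEM_VOID)
			num++;
	num++; /* room for the END item */

	items = calloc(num, sizeof(*items));
	if (items == NULL)
		return NULL;

	for (i = 0, j = 0; pattern[i].type != ITEM_END; i++)
		if (pattern[i].type != ITEM_VOID)
			items[j++] = pattern[i];
	items[j].type = ITEM_END;

	*out_num = num;
	return items;
}

/* Small driver showing the call and cleanup on one sample pattern. */
static size_t demo(void)
{
	const struct flow_item pattern[] = {
		{ITEM_VOID}, {ITEM_ETH}, {ITEM_VOID}, {ITEM_IPV4}, {ITEM_END}
	};
	size_t num = 0;
	struct flow_item *items = pattern_skip_void(pattern, &num);

	free(items);
	return num;
}
```

Dropping the global also removes a latent thread-safety problem: two concurrent flow validations could previously race on the shared 32-entry array.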
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
1 file changed, 9 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..c5bb787f28 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -145,9 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3834,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3853,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3863,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v5 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 17:24 ` Medvedkin, Vladimir
2026-02-17 12:13 ` [PATCH v5 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (18 subsequent siblings)
27 siblings, 1 reply; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we allocate the filter
structure with rte_zmalloc and free it again on every exit path. This is
not needed, as the memory is never stored anywhere, so replace it with a
stack allocation.
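The before/after shape of this change can be shown in miniature. The names below (`cloud_filter`, `send_filter`, `tunnel_filter_set`) are hypothetical stand-ins for the adminq element and the driver function; the point is that a zero-initialized stack variable removes both the allocation-failure branch and the `free()` that every early return previously had to remember.

```c
#include <stdint.h>

/* Hypothetical stand-in for the adminq cloud filter element. */
struct cloud_filter {
	uint8_t outer_mac[6];
	uint16_t inner_vlan;
	uint32_t tenant_id;
};

/* Stand-in for the adminq send; rejects an unset tenant id. */
static int send_filter(const struct cloud_filter *f)
{
	return f->tenant_id != 0 ? 0 : -1;
}

/* Before: rte_zmalloc() + fill + rte_free() on every exit path.
 * After: one zero-initialized stack variable, automatically reclaimed. */
static int tunnel_filter_set(uint32_t tenant_id)
{
	struct cloud_filter filter = {0};

	filter.tenant_id = tenant_id;
	return send_filter(&filter);	/* no cleanup needed on any path */
}
```

This is safe here because the structure is small (well under a page) and its lifetime ends with the call; buffers that the hardware keeps a reference to would still need DMA-capable memory.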
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 608a6cff4d..d3404d7720 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v5 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-17 12:13 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (17 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:13 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
allocate the lookup table with rte_zmalloc and free it again before
returning. This memory is never stored anywhere, so there is no need for
it to come from the rte heap; replace it with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d3404d7720..23a8f625ef 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -4673,7 +4673,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -4690,7 +4690,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
* [PATCH v5 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (16 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters, we
allocate temporary filter arrays with rte_zmalloc and free them again
before returning. This memory is never stored anywhere, so there is no
need for it to come from the rte heap; replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 38 +++++++++++++--------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 16 +++++------
2 files changed, 26 insertions(+), 28 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 23a8f625ef..1160acc551 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4165,8 +4165,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -4206,7 +4205,7 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
+ free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6231,8 +6230,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
+ mac_filter = calloc(num, sizeof(*mac_filter));
if (mac_filter == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -6264,7 +6262,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
DONE:
- rte_free(mac_filter);
+ free(mac_filter);
return ret;
}
@@ -7154,7 +7152,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7207,7 +7205,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7230,7 +7228,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
+ req_list = calloc(1, ele_buff_size);
if (req_list == NULL) {
PMD_DRV_LOG(ERR, "Fail to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7286,7 +7284,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
} while (num < total);
DONE:
- rte_free(req_list);
+ free(req_list);
return ret;
}
@@ -7455,7 +7453,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7484,7 +7482,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7510,7 +7508,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7532,7 +7530,7 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num++;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7561,7 +7559,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
return I40E_ERR_PARAM;
}
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
+ mv_f = calloc(mac_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -7594,7 +7592,7 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
vsi->vlan_num--;
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7626,7 +7624,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7665,7 +7663,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7696,7 +7694,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7725,7 +7723,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..fb73fa924f 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -233,7 +233,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +250,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +294,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +312,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (15 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This is not needed as the response is only used temporarily,
so replace it with a stack-allocated structure (the allocation is fixed
in size and pretty small).
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..2a5637b0c1 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
+ uint32_t len = sizeof(res);
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (14 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed as the message
buffer is only used temporarily within the function scope, so replace it
with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 1160acc551..ac465c90a4 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6896,7 +6896,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
int ret;
info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
+ info.msg_buf = calloc(1, info.buf_len);
if (!info.msg_buf) {
PMD_DRV_LOG(ERR, "Failed to allocate mem");
return;
@@ -6936,7 +6936,7 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
+ free(info.msg_buf);
}
static void
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (13 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages
and checking profile information, we are using rte_zmalloc followed by
an immediate rte_free. This is not needed as these buffers are only used
temporarily, so replace them with stack-allocated structures or regular
malloc/free where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++++++------------------
1 file changed, 14 insertions(+), 29 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index fb73fa924f..a2e24b5ea2 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1569,9 +1569,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
+ buff = calloc(I40E_MAX_PROFILE_NUM + 4, I40E_PROFILE_INFO_SIZE);
if (!buff) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return -1;
@@ -1583,7 +1581,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
+ free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1591,20 +1589,20 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
+ free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
+ free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
+ free(buff);
return 2;
}
}
@@ -1615,12 +1613,12 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
+ free(buff);
return 3;
}
}
- rte_free(buff);
+ free(buff);
return 0;
}
@@ -1636,7 +1634,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1701,26 +1702,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1733,13 +1723,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1751,7 +1739,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1764,14 +1751,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1784,7 +1770,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (12 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by an immediate
rte_free. This is not needed as these buffers are only used temporarily
within the function scope, so replace them with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index ac465c90a4..8a56f05eeb 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11788,7 +11788,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
+ pctype = calloc(pctype_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!pctype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11799,7 +11799,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
+ free(pctype);
return -1;
}
@@ -11880,7 +11880,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
+ free(pctype);
return 0;
}
@@ -11926,7 +11926,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
+ ptype = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_info));
if (!ptype) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return -1;
@@ -11938,15 +11938,14 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
+ free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
+ ptype_mapping = calloc(ptype_num, sizeof(struct rte_pmd_i40e_ptype_mapping));
if (!ptype_mapping) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
+ free(ptype);
return -1;
}
@@ -12084,8 +12083,8 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
+ free(ptype_mapping);
+ free(ptype);
return ret;
}
@@ -12120,7 +12119,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12132,7 +12131,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12170,7 +12169,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 16/27] net/iavf: remove remnants of pipeline mode
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (11 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the definitions it used were
left behind in the code. Remove them, as they are now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (10 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, IPsec code will use
dynamic memory allocation (rte_malloc one at that!) to allocate VF message
structures which are ~40 bytes in size, and then immediately frees them.
This is wasteful and unnecessary, so use stack allocation instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 157 +++++++--------------
1 file changed, 51 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..cd85d91850 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,24 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp = {0};
+ struct inline_ipsec_msg *request = &sa_req.msg, *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +529,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +540,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +707,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +752,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +766,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +773,17 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +795,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +807,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +857,17 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg, *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +880,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 18/27] net/iavf: decouple hash uninit from parser uninit
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 17/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (9 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser
deinitialization, but should rather be a separate step in the dev close
flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
3 files changed, 26 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/intel/iavf/iavf_hash.h
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..70eb7e7ec5 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -35,6 +35,7 @@
#include "iavf_generic_flow.h"
#include "rte_pmd_iavf.h"
#include "iavf_ipsec_crypto.h"
+#include "iavf_hash.h"
/* devargs */
#define IAVF_PROTO_XTR_ARG "proto_xtr"
@@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..d864998402 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -22,6 +22,7 @@
#include "iavf_log.h"
#include "iavf.h"
#include "iavf_generic_flow.h"
+#include "iavf_hash.h"
#define IAVF_PHINT_NONE 0
#define IAVF_PHINT_GTPU BIT_ULL(0)
@@ -77,7 +78,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
new file mode 100644
index 0000000000..2348f32673
--- /dev/null
+++ b/drivers/net/intel/iavf/iavf_hash.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _IAVF_HASH_H_
+#define _IAVF_HASH_H_
+
+#include "iavf.h"
+
+void
+iavf_hash_uninit(struct iavf_adapter *ad);
+
+#endif /* _IAVF_HASH_H_ */
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 18/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (8 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (the redirection/lookup table and the
hash key), we are using rte_zmalloc followed by an immediate rte_free.
This is not needed, as this memory is not stored anywhere, so replace
it with standard calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 70eb7e7ec5..d3fa47fd5e 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1574,7 +1574,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
` (7 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This is not needed, as
this memory is not stored anywhere, so replace it with standard
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..19dce17612 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
}
}
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return;
@@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
begin = next_begin;
} while (begin < IAVF_NUM_MACADDR_MAX);
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 21/27] net/iavf: avoid rte malloc in IPsec operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (6 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we are using rte_malloc followed by an
immediate rte_free. This is not needed as these structures are only
used temporarily, so replace them with stack-allocated structures for
small fixed-size messages and standard calloc/free for variable-sized
buffers.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 58 ++++++++--------------
1 file changed, 21 insertions(+), 37 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index cd85d91850..3274ce503c 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -899,29 +900,18 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -939,10 +929,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -957,10 +947,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1113,7 +1099,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1121,8 +1107,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1148,8 +1133,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1538,7 +1523,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1546,8 +1531,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1573,8 +1557,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (5 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is not needed as these
structures are not being stored anywhere, so replace them with stack
allocation or standard calloc/free, as appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 64 +++++++++++------------------
1 file changed, 25 insertions(+), 39 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 19dce17612..a947ff92cf 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,14 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1125,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1133,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1229,7 +1215,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
size = sizeof(*vc_config) +
sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
+ vc_config = calloc(1, size);
if (!vc_config)
return -ENOMEM;
@@ -1292,7 +1278,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
PMD_DRV_LOG(ERR, "Failed to execute command of"
" VIRTCHNL_OP_CONFIG_VSI_QUEUES");
- rte_free(vc_config);
+ free(vc_config);
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (4 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This is not needed, as this memory is not
stored anywhere, so replace it with standard calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index a947ff92cf..498b8816fc 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1294,7 +1294,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
len = sizeof(struct virtchnl_irq_map_info) +
sizeof(struct virtchnl_vector_map) * vf->nb_msix;
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1318,7 +1318,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
+ free(map_info);
return err;
}
@@ -1336,7 +1336,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
len = sizeof(struct virtchnl_queue_vector_maps) +
sizeof(struct virtchnl_queue_vector) * (num - 1);
- map_info = rte_zmalloc("map_info", len, 0);
+ map_info = calloc(1, len);
if (!map_info)
return -ENOMEM;
@@ -1359,7 +1359,7 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
+ free(map_info);
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (3 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This is not
needed, as this memory is not stored anywhere, so replace it with
standard calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 81da5a4656..037382b336 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1338,7 +1338,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1358,7 +1358,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index ade13600de..fbd7c0f2f2 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5583,7 +5583,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
+ lut = calloc(1, RTE_MAX(reta_size, lut_size));
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5607,7 +5607,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
@@ -5632,7 +5632,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -5650,7 +5650,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
}
out:
- rte_free(lut);
+ free(lut);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
` (2 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This is not needed as this
memory is not being stored anywhere, so replace it with regular
calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 037382b336..d2a7a2847b 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -936,7 +936,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
len = sizeof(struct virtchnl_ether_addr_list);
len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
- list = rte_zmalloc(NULL, len, 0);
+ list = calloc(1, len);
if (!list) {
PMD_DRV_LOG(ERR, "fail to allocate memory");
return -ENOMEM;
@@ -961,7 +961,7 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
+ free(list);
return err;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 12:14 ` [PATCH v5 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-17 13:00 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Burakov, Anatoly
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This is not needed, as this memory is
not stored anywhere, so replace it with standard calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index da22b65a77..5f44b5c818 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1879,13 +1879,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -1950,13 +1950,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index afdc8f220a..854c6e8dca 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -676,13 +676,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -733,8 +733,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v5 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (25 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-17 12:14 ` Anatoly Burakov
2026-02-17 13:00 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Burakov, Anatoly
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-17 12:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
is not needed as these buffers are only used temporarily within the
function scope, so replace them with standard calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 5f44b5c818..8cca831fa9 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2504,11 +2505,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 4049157eab..3f7a9f4714 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2136,19 +2137,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2167,7 +2166,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2175,8 +2174,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index 854c6e8dca..1174c505da 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1211,7 +1212,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* Re: [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-16 17:06 ` Bruce Richardson
@ 2026-02-17 12:32 ` Burakov, Anatoly
2026-02-17 12:46 ` Bruce Richardson
0 siblings, 1 reply; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-17 12:32 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
On 2/16/2026 6:06 PM, Bruce Richardson wrote:
> On Wed, Feb 11, 2026 at 01:52:52PM +0000, Anatoly Burakov wrote:
>> Currently, when updating or querying RSS redirection table (RETA), we
>> are using rte_zmalloc followed by an immediate rte_free. This is not
>> needed as this memory is not being stored anywhere, so replace it with
>> regular malloc/free.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
>> index 06430e6319..654b0e5d16 100644
>> --- a/drivers/net/intel/i40e/i40e_ethdev.c
>> +++ b/drivers/net/intel/i40e/i40e_ethdev.c
>> @@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
>> return -EINVAL;
>> }
>>
>> - lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
>> + lut = calloc(1, reta_size);
>> if (!lut) {
>> PMD_DRV_LOG(ERR, "No memory can be allocated");
>> return -ENOMEM;
>> @@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
>> pf->adapter->rss_reta_updated = 1;
>>
>> out:
>> - rte_free(lut);
>> + free(lut);
>>
>> return ret;
>> }
>
> For i40e do we not have a reasonable max reta size that we could use for a
> local array variable, save allocating and freeing entirely?
>
It's on the order of kilobytes, I think, so I decided against stack
allocation for this scenario.
--
Thanks,
Anatoly
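The pattern the patch applies - a scratch buffer that is allocated, used, and
freed within one function, so plain libc allocation suffices and rte_zmalloc's
hugepage heap is unnecessary - can be sketched in plain C. The helper name and
shape below are illustrative only, not the driver's actual code:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for a RETA query path: the lookup table is a
 * short-lived scratch buffer, so calloc()/free() is enough. rte_zmalloc()
 * is only needed for memory that must live in DPDK's hugepage heap,
 * e.g. buffers shared with hardware or other processes. */
static int query_reta(uint16_t reta_size, uint16_t *out)
{
	uint8_t *lut = calloc(1, reta_size);

	if (lut == NULL)
		return -ENOMEM;
	/* ... fill lut from hardware registers (omitted) ... */
	for (uint16_t i = 0; i < reta_size; i++)
		out[i] = lut[i];
	free(lut);
	return 0;
}
```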
* Re: [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-17 12:32 ` Burakov, Anatoly
@ 2026-02-17 12:46 ` Bruce Richardson
2026-02-17 14:38 ` Stephen Hemminger
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-17 12:46 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Tue, Feb 17, 2026 at 01:32:05PM +0100, Burakov, Anatoly wrote:
> On 2/16/2026 6:06 PM, Bruce Richardson wrote:
> > On Wed, Feb 11, 2026 at 01:52:52PM +0000, Anatoly Burakov wrote:
> > > Currently, when updating or querying RSS redirection table (RETA), we
> > > are using rte_zmalloc followed by an immediate rte_free. This is not
> > > needed as this memory is not being stored anywhere, so replace it with
> > > regular malloc/free.
> > >
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > > drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
> > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
> > > index 06430e6319..654b0e5d16 100644
> > > --- a/drivers/net/intel/i40e/i40e_ethdev.c
> > > +++ b/drivers/net/intel/i40e/i40e_ethdev.c
> > > @@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> > > return -EINVAL;
> > > }
> > > - lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
> > > + lut = calloc(1, reta_size);
> > > if (!lut) {
> > > PMD_DRV_LOG(ERR, "No memory can be allocated");
> > > return -ENOMEM;
> > > @@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> > > pf->adapter->rss_reta_updated = 1;
> > > out:
> > > - rte_free(lut);
> > > + free(lut);
> > > return ret;
> > > }
> >
> > For i40e do we not have a reasonable max reta size that we could use for a
> > local array variable, save allocating and freeing entirely?
> >
>
> It's on the order of kilobytes I think so I decided against stack allocation
> for this scenario.
>
If it's only a kilobyte, I would tend to go with stack allocation.
However, it's up to you.
/Bruce
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-16 16:58 ` Bruce Richardson
@ 2026-02-17 12:50 ` Burakov, Anatoly
2026-02-17 12:58 ` Bruce Richardson
0 siblings, 1 reply; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-17 12:50 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 5:58 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:12AM +0000, Anatoly Burakov wrote:
>> The macros used were not informative and did not add any value beyond code
>> golf, so remove them and make MAC type checks explicit.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
>> drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
>> 2 files changed, 17 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>> index 5dbd659941..7dc02a472b 100644
>> --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>> +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>> @@ -137,18 +137,6 @@
>> #define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
>> #define IXGBE_MAX_L2_TN_FILTER_NUM 128
>>
>> -#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
>> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
>> - return -ENOTSUP;\
>> -} while (0)
>> -
>> -#define MAC_TYPE_FILTER_SUP(type) do {\
>> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
>> - (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
>> - (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
>> - return -ENOTSUP;\
>> -} while (0)
>> -
>
> Ack for removing the former. For the latter, since the list is longer and
> the code is used twice, I'd be tempted to convert to an inline function
> taking in struct hw and returning type bool. WDYT?
>
I don't want to use a macro/inline function just to save on code; it has
to have some semantic meaning. Do you have any suggestions on what it is
that we'd be checking in these cases?
--
Thanks,
Anatoly
* Re: [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode
2026-02-16 17:16 ` Bruce Richardson
@ 2026-02-17 12:51 ` Burakov, Anatoly
2026-02-17 13:00 ` Bruce Richardson
0 siblings, 1 reply; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-17 12:51 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 6:16 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:27AM +0000, Anatoly Burakov wrote:
>> When pipeline mode was removed, some of the things used by pipelines were
>> left in the code. Remove them as they are unused.
>>
>
> This needs a fixes tag to point to the removal.
>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/iavf/iavf_fdir.c | 1 -
>> drivers/net/intel/iavf/iavf_fsub.c | 1 -
>> drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
>> drivers/net/intel/iavf/iavf_hash.c | 1 -
>> drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
>> 5 files changed, 19 deletions(-)
>>
Well, it's technically not a *bug*, so I'm not sure if having a Fixes: tag
is worth it here. We're not fixing anything that is broken.
--
Thanks,
Anatoly
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-17 12:50 ` Burakov, Anatoly
@ 2026-02-17 12:58 ` Bruce Richardson
2026-02-17 14:23 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-17 12:58 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Vladimir Medvedkin
On Tue, Feb 17, 2026 at 01:50:36PM +0100, Burakov, Anatoly wrote:
> On 2/16/2026 5:58 PM, Bruce Richardson wrote:
> > On Fri, Feb 13, 2026 at 10:26:12AM +0000, Anatoly Burakov wrote:
> > > The macros used were not informative and did not add any value beyond code
> > > golf, so remove them and make MAC type checks explicit.
> > >
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > > drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
> > > drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
> > > 2 files changed, 17 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > index 5dbd659941..7dc02a472b 100644
> > > --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > @@ -137,18 +137,6 @@
> > > #define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
> > > #define IXGBE_MAX_L2_TN_FILTER_NUM 128
> > > -#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
> > > - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
> > > - return -ENOTSUP;\
> > > -} while (0)
> > > -
> > > -#define MAC_TYPE_FILTER_SUP(type) do {\
> > > - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
> > > - (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
> > > - (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
> > > - return -ENOTSUP;\
> > > -} while (0)
> > > -
> >
> > Ack for removing the former. For the latter, since the list is longer and
> > the code is used twice, I'd be tempted to convert to an inline function
> > taking in struct hw and returning type bool. WDYT?
> >
>
> I don't want to use a macro/inline function just to save on code, it has to
> have some semantic meaning. Do you have any suggestions on what it is that
> we'd be checking in these cases?
>
From the title of the macro I assumed it was checking whether MAC filters
are supported or not. However, if that's not really the case and this is an
arbitrary set of MAC types for some particular use case, then yes, agree
that it's best to remove the macro completely and inline.
/Bruce
* Re: [PATCH v4 16/27] net/iavf: remove remnants of pipeline mode
2026-02-17 12:51 ` Burakov, Anatoly
@ 2026-02-17 13:00 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-17 13:00 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Vladimir Medvedkin
On Tue, Feb 17, 2026 at 01:51:54PM +0100, Burakov, Anatoly wrote:
> On 2/16/2026 6:16 PM, Bruce Richardson wrote:
> > On Fri, Feb 13, 2026 at 10:26:27AM +0000, Anatoly Burakov wrote:
> > > When pipeline mode was removed, some of the things used by pipelines were
> > > left in the code. Remove them as they are unused.
> > >
> >
> > This needs a fixes tag to point to the removal.
> >
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > > drivers/net/intel/iavf/iavf_fdir.c | 1 -
> > > drivers/net/intel/iavf/iavf_fsub.c | 1 -
> > > drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
> > > drivers/net/intel/iavf/iavf_hash.c | 1 -
> > > drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
> > > 5 files changed, 19 deletions(-)
> > >
>
> Well it's technically not a *bug* so I'm not sure if having a Fixes: tag is
> worth it here. We're not fixing anything that is broken.
>
Well, I'd view it as a bug, but agreed that it has no user-visible
consequences, so no point in Ccing stable and backporting etc. Therefore
omitting a Fixes: tag is ok, I suppose.
/Bruce
* Re: [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (26 preceding siblings ...)
2026-02-17 12:14 ` [PATCH v5 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
@ 2026-02-17 13:00 ` Burakov, Anatoly
27 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-17 13:00 UTC (permalink / raw)
To: dev
On 2/17/2026 1:13 PM, Anatoly Burakov wrote:
> This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
>
> IXGBE:
>
> - Remove unnecessary macros and #ifdef's
> - Disentangle unrelated flow API code paths
>
> I40E:
>
> - Get rid of global variables and unnecessary allocations
> - Reduce code duplication around default RSS keys
> - Use more appropriate integer types and definitions
>
> IAVF:
>
> - Remove dead code
> - Remove unnecessary allocations
> - Separate RSS uninit from hash flow parser uninit
>
> ICE:
>
> - Remove unnecessary allocations
>
> This is done in preparation for further rework.
>
> Note that this patchset depends on driver bug fix patchset [1] (already integrated into next-net-intel).
>
> [1] https://patches.dpdk.org/project/dpdk/list/?series=37350
>
> v1 -> v2:
> - Added more cleanups around rte_malloc usage
>
> v2 -> v3:
> - Reworded some commit messages
> - Added a new patch for ICE
> - Rebased on latest bug fix patches
>
> v3 -> v4:
> - Rebased on latest bugfix patchset
>
> v4 -> v5:
> - Adjusted typing for queue size
> - Fixed missing zero initializations for stack allocations
>
Due to a Thunderbird mishap I missed a bunch of feedback, so a v6 will come.
--
Thanks,
Anatoly
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-17 12:58 ` Bruce Richardson
@ 2026-02-17 14:23 ` Burakov, Anatoly
2026-02-17 15:32 ` Bruce Richardson
0 siblings, 1 reply; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-17 14:23 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/17/2026 1:58 PM, Bruce Richardson wrote:
> On Tue, Feb 17, 2026 at 01:50:36PM +0100, Burakov, Anatoly wrote:
>> On 2/16/2026 5:58 PM, Bruce Richardson wrote:
>>> On Fri, Feb 13, 2026 at 10:26:12AM +0000, Anatoly Burakov wrote:
>>>> The macros used were not informative and did not add any value beyond code
>>>> golf, so remove them and make MAC type checks explicit.
>>>>
>>>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>> ---
>>>> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
>>>> drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
>>>> 2 files changed, 17 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>>>> index 5dbd659941..7dc02a472b 100644
>>>> --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>>>> +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
>>>> @@ -137,18 +137,6 @@
>>>> #define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
>>>> #define IXGBE_MAX_L2_TN_FILTER_NUM 128
>>>> -#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
>>>> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
>>>> - return -ENOTSUP;\
>>>> -} while (0)
>>>> -
>>>> -#define MAC_TYPE_FILTER_SUP(type) do {\
>>>> - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
>>>> - (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
>>>> - (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
>>>> - return -ENOTSUP;\
>>>> -} while (0)
>>>> -
>>>
>>> Ack for removing the former. For the latter, since the list is longer and
>>> the code is used twice, I'd be tempted to convert to an inline function
>>> taking in struct hw and returning type bool. WDYT?
>>>
>>
>> I don't want to use a macro/inline function just to save on code, it has to
>> have some semantic meaning. Do you have any suggestions on what it is that
>> we'd be checking in these cases?
>>
> From the title of the macro I assumed it was whether mac filters are
> supported or not? However, if that's not really the case and this is an
> arbitrary set of MAC types for some particular use case, then yes, agree
> that it's best to remove the macro completely and inline.
>
My reading of the macro names is that they "filter supported features by
MAC type" (MAC_TYPE_FILTER), and that there are two varieties -
"supported" and "supported extended" (SUP and SUP_EXT) - but with no
actual semantic meaning.
--
Thanks,
Anatoly
* Re: [PATCH v3 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-17 12:46 ` Bruce Richardson
@ 2026-02-17 14:38 ` Stephen Hemminger
0 siblings, 0 replies; 297+ messages in thread
From: Stephen Hemminger @ 2026-02-17 14:38 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Burakov, Anatoly, dev
On Tue, 17 Feb 2026 12:46:44 +0000
Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Tue, Feb 17, 2026 at 01:32:05PM +0100, Burakov, Anatoly wrote:
> > On 2/16/2026 6:06 PM, Bruce Richardson wrote:
> > > On Wed, Feb 11, 2026 at 01:52:52PM +0000, Anatoly Burakov wrote:
> > > > Currently, when updating or querying RSS redirection table (RETA), we
> > > > are using rte_zmalloc followed by an immediate rte_free. This is not
> > > > needed as this memory is not being stored anywhere, so replace it with
> > > > regular malloc/free.
> > > >
> > > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > > ---
> > > > drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++----
> > > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
> > > > index 06430e6319..654b0e5d16 100644
> > > > --- a/drivers/net/intel/i40e/i40e_ethdev.c
> > > > +++ b/drivers/net/intel/i40e/i40e_ethdev.c
> > > > @@ -4630,7 +4630,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> > > > return -EINVAL;
> > > > }
> > > > - lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
> > > > + lut = calloc(1, reta_size);
> > > > if (!lut) {
> > > > PMD_DRV_LOG(ERR, "No memory can be allocated");
> > > > return -ENOMEM;
> > > > @@ -4649,7 +4649,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
> > > > pf->adapter->rss_reta_updated = 1;
> > > > out:
> > > > - rte_free(lut);
> > > > + free(lut);
> > > > return ret;
> > > > }
> > >
> > > For i40e do we not have a reasonable max reta size that we could use for a
> > > local array variable, save allocating and freeing entirely?
> > >
> >
> > It's on the order of kilobytes I think so I decided against stack allocation
> > for this scenario.
> >
> If it's only a kilobyte, I would tend to go with stack allocation.
> However, it's up to you.
Agree, anything under 4K seems reasonable to be on stack.
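The stack-allocation alternative suggested here could look like the sketch
below. Note that I40E_MAX_RETA_SIZE and the helper are hypothetical names for
illustration - the thread only establishes that the real bound is on the order
of kilobytes:

```c
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Assumed upper bound on the RETA size; a hypothetical define, not one
 * taken from the driver. Anything in the low kilobytes is fine on stack
 * per the discussion above. */
#define I40E_MAX_RETA_SIZE 4096

static int update_reta(const uint8_t *entries, uint16_t reta_size)
{
	uint8_t lut[I40E_MAX_RETA_SIZE];

	if (reta_size > I40E_MAX_RETA_SIZE)
		return -EINVAL;
	/* keep the zero-initialization that rte_zmalloc() provided */
	memset(lut, 0, reta_size);
	memcpy(lut, entries, reta_size);
	/* ... program lut into hardware (omitted) ... */
	return 0;
}
```

This trades a bounds check against the maximum size for the removal of the
allocation and its failure path entirely.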
* Re: [PATCH v4 01/27] net/ixgbe: remove MAC type check macros
2026-02-17 14:23 ` Burakov, Anatoly
@ 2026-02-17 15:32 ` Bruce Richardson
0 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-17 15:32 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Vladimir Medvedkin
On Tue, Feb 17, 2026 at 03:23:00PM +0100, Burakov, Anatoly wrote:
> On 2/17/2026 1:58 PM, Bruce Richardson wrote:
> > On Tue, Feb 17, 2026 at 01:50:36PM +0100, Burakov, Anatoly wrote:
> > > On 2/16/2026 5:58 PM, Bruce Richardson wrote:
> > > > On Fri, Feb 13, 2026 at 10:26:12AM +0000, Anatoly Burakov wrote:
> > > > > The macros used were not informative and did not add any value beyond code
> > > > > golf, so remove them and make MAC type checks explicit.
> > > > >
> > > > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > > > ---
> > > > > drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
> > > > > drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
> > > > > 2 files changed, 17 insertions(+), 15 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > > > index 5dbd659941..7dc02a472b 100644
> > > > > --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > > > +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> > > > > @@ -137,18 +137,6 @@
> > > > > #define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
> > > > > #define IXGBE_MAX_L2_TN_FILTER_NUM 128
> > > > > -#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
> > > > > - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
> > > > > - return -ENOTSUP;\
> > > > > -} while (0)
> > > > > -
> > > > > -#define MAC_TYPE_FILTER_SUP(type) do {\
> > > > > - if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
> > > > > - (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
> > > > > - (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
> > > > > - return -ENOTSUP;\
> > > > > -} while (0)
> > > > > -
> > > >
> > > > Ack for removing the former. For the latter, since the list is longer and
> > > > the code is used twice, I'd be tempted to convert to an inline function
> > > > taking in struct hw and returning type bool. WDYT?
> > > >
> > >
> > > I don't want to use a macro/inline function just to save on code, it has to
> > > have some semantic meaning. Do you have any suggestions on what it is that
> > > we'd be checking in these cases?
> > >
> > From the title of the macro I assumed it was whether mac filters are
> > supported or not? However, if that's not really the case and this is an
> > arbitrary set of MAC types for some particular use case, then yes, agree
> > that it's best to remove the macro completely and inline.
> >
>
> My reading of the name of the macros are that they are "filtering supported
> features by mac type" (MAC_TYPE_FILTER), and there are two varieties -
> "supported" and "supported extended" (SUP and SUP_EXT), but with no actual
> semantic meaning.
>
Ack, then inlining as you suggest seems reasonable.
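The inline-function shape Bruce floated earlier in the thread might look like
the following sketch. The enum values are the ones named in the removed macro
(the real definitions live in the base driver); the helper name is invented
here purely for illustration:

```c
#include <stdbool.h>

/* Minimal stand-ins for the ixgbe MAC type values named in the removed
 * MAC_TYPE_FILTER_SUP macro; real definitions are in the base driver. */
enum ixgbe_mac_type {
	ixgbe_mac_82598EB,
	ixgbe_mac_82599EB,
	ixgbe_mac_X540,
	ixgbe_mac_X550,
	ixgbe_mac_X550EM_x,
	ixgbe_mac_X550EM_a,
	ixgbe_mac_E610,
};

/* Possible shape of the helper: a predicate the caller tests and acts on
 * explicitly, instead of a macro with a hidden early return. */
static inline bool
ixgbe_mac_filter_supported(enum ixgbe_mac_type type)
{
	switch (type) {
	case ixgbe_mac_82599EB:
	case ixgbe_mac_X540:
	case ixgbe_mac_X550:
	case ixgbe_mac_X550EM_x:
	case ixgbe_mac_X550EM_a:
	case ixgbe_mac_E610:
		return true;
	default:
		return false;
	}
}
```

A caller would then write `if (!ixgbe_mac_filter_supported(hw->mac.type))
return -ENOTSUP;`, keeping the control flow visible at the call site.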
* Re: [PATCH v5 03/27] net/ixgbe: split security and ntuple filters
2026-02-17 12:13 ` [PATCH v5 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-17 16:59 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 16:59 UTC (permalink / raw)
To: Anatoly Burakov, dev
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> These filters are mashed together even though they almost do not share any
> code at all between each other. Separate security filter from ntuple filter
> and parse it separately.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/ixgbe/ixgbe_flow.c | 194 ++++++++++++++++-----------
> 1 file changed, 114 insertions(+), 80 deletions(-)
>
<snip>
> +
> + /*
> + * we get pointer to security session from security action,
> + * which is const. however, we do need to act on the session, so
> + * either we do some kind of pointer based lookup to get session
> + * pointer internally (which quickly gets unwieldy for lots of
> + * flows case), or we simply cast away constness.
> + */
> + session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
> + return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
nit: I'd recommend handling the error here the same way it is handled
elsewhere in this function, i.e. if (ret) {rte_flow_error_set() ... }
apart from this
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> +}
> +
> /* a specific function for ixgbe because the flags is specific */
> static int
> ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
<snip>
--
Regards,
Vladimir
* Re: [PATCH v5 04/27] net/i40e: get rid of global filter variables
2026-02-17 12:13 ` [PATCH v5 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-17 16:59 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 16:59 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> Currently, i40e driver relies on global state to work around the fact that
> `rte_flow_validate()` is being called directly from `rte_flow_create()`,
> and it not being possible to pass state between two functions. Fix that by
> making a small wrapper around validation that will create a dummy context.
>
> Additionally, tunnel filter doesn't appear to be used by anything and so is
> omitted from the structure.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
> drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
> 2 files changed, 68 insertions(+), 65 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v5 05/27] net/i40e: make default RSS key global
2026-02-17 12:13 ` [PATCH v5 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-17 17:06 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 17:06 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> Currently, there are multiple places where we need a default RSS key, but
> in each of those places we define it as a local variable. Make it global
> constant, and adjust all callers to use the global constant. When dealing
> with adminq, we cannot send down the constant because adminq commands do
> not guarantee const-ness, so copy RSS key into a local buffer before
> sending it down to hardware.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
> drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
> drivers/net/intel/i40e/i40e_hash.h | 3 +++
> 3 files changed, 30 insertions(+), 18 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v5 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-17 12:13 ` [PATCH v5 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-17 17:09 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 17:09 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> Currently, when we compare queue numbers against maximum traffic class
> value of 64, we do not use unsigned values, which results in compiler
> warning when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned
> value. Make it unsigned 16-bit, and adjust callers to use correct types.
> As a consequence, `i40e_align_floor` now returns unsigned value as well -
> this is correct, because nothing about that function implies signed usage
> being a valid use case.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
> drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
> drivers/net/intel/i40e/i40e_hash.c | 4 ++--
> 3 files changed, 12 insertions(+), 11 deletions(-)
>
<snip>
--
Regards,
Vladimir
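The commit message notes that `i40e_align_floor` now returns an unsigned
value. A power-of-two floor with unsigned types throughout can be sketched as
follows; this is an illustrative reimplementation under that assumption, not
the driver's actual code:

```c
#include <stdint.h>

/* Round n down to the nearest power of two (0 stays 0), using unsigned
 * arithmetic end to end so comparisons against unsigned limits such as
 * I40E_MAX_Q_PER_TC raise no sign-mismatch warnings. */
static inline uint16_t
align_floor(uint16_t n)
{
	/* repeatedly clear the lowest set bit until one bit remains */
	while (n & (n - 1))
		n &= n - 1;
	return n;
}
```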
* Re: [PATCH v5 07/27] net/i40e: use proper flex len define
2026-02-17 12:13 ` [PATCH v5 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-17 17:10 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 17:10 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> For FDIR, we have byte arrays that are supposed to be limited to whatever
> the HW supports in terms of flex descriptor matching. However, in the
> structure definition, spec and mask bytes are using different array length
> defines, and the only reason why this works is because they evaluate to the
> same value.
>
> Use the i40e-specific definition instead.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
> index ca6638b32c..d57c53f661 100644
> --- a/drivers/net/intel/i40e/i40e_ethdev.h
> +++ b/drivers/net/intel/i40e/i40e_ethdev.h
> @@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
> /* A structure used to contain extend input of flow */
> struct i40e_fdir_flow_ext {
> uint16_t vlan_tci;
> - uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
> + uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
> /* It is filled by the flexible payload to match. */
> uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
> uint8_t raw_id;
--
Regards,
Vladimir
* Re: [PATCH v5 08/27] net/i40e: remove global pattern variable
2026-02-17 12:13 ` [PATCH v5 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-17 17:15 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 17:15 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> When parsing flow patterns, current code cleans up the pattern list by
> removing void flow items, and copies the patterns into an array. That
> array, when dealing with under 32 flow items, is allocated on the stack,
> but when the pattern is big enough, a new list is dynamically allocated.
> This allocated list is a global variable, and is allocated using
> rte_zmalloc call which seems like overkill for this use case.
>
> Remove the global variable, and replace the split behavior with
> unconditional allocation.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_flow.c | 29 +++++++++--------------------
> 1 file changed, 9 insertions(+), 20 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v5 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-17 12:13 ` [PATCH v5 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-17 17:24 ` Medvedkin, Vladimir
0 siblings, 0 replies; 297+ messages in thread
From: Medvedkin, Vladimir @ 2026-02-17 17:24 UTC (permalink / raw)
To: Anatoly Burakov, dev, Bruce Richardson
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
On 2/17/2026 12:13 PM, Anatoly Burakov wrote:
> Currently, when setting tunnel configuration, we are using rte_zmalloc
> followed by an immediate rte_free. This is not needed as this memory is
> not being stored anywhere, so replace it with stack allocation.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
> 1 file changed, 53 insertions(+), 71 deletions(-)
>
<snip>
--
Regards,
Vladimir
* Re: [PATCH v4 18/27] net/iavf: decouple hash uninit from parser uninit
2026-02-16 17:23 ` Bruce Richardson
@ 2026-02-18 10:32 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-18 10:32 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 6:23 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:29AM +0000, Anatoly Burakov wrote:
>> Currently, parser deinitialization will trigger removal of current RSS
>> configuration. This should not be done as part of parser deinitialization,
>> but should rather be a separate step in the dev close flow.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/iavf/iavf_ethdev.c | 4 ++++
>> drivers/net/intel/iavf/iavf_hash.c | 13 +++++++++----
>> drivers/net/intel/iavf/iavf_hash.h | 13 +++++++++++++
>> 3 files changed, 26 insertions(+), 4 deletions(-)
>> create mode 100644 drivers/net/intel/iavf/iavf_hash.h
>>
>> diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
>> index 802e095174..70eb7e7ec5 100644
>> --- a/drivers/net/intel/iavf/iavf_ethdev.c
>> +++ b/drivers/net/intel/iavf/iavf_ethdev.c
>> @@ -35,6 +35,7 @@
>> #include "iavf_generic_flow.h"
>> #include "rte_pmd_iavf.h"
>> #include "iavf_ipsec_crypto.h"
>> +#include "iavf_hash.h"
>>
>> /* devargs */
>> #define IAVF_PROTO_XTR_ARG "proto_xtr"
>> @@ -2972,6 +2973,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
>> /* free iAVF security device context all related resources */
>> iavf_security_ctx_destroy(adapter);
>>
>> + /* remove RSS configuration */
>> + iavf_hash_uninit(adapter);
>> +
>> iavf_flow_flush(dev, NULL);
>> iavf_flow_uninit(adapter);
>>
>> diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
>> index a40fed7542..d864998402 100644
>> --- a/drivers/net/intel/iavf/iavf_hash.c
>> +++ b/drivers/net/intel/iavf/iavf_hash.c
>> @@ -22,6 +22,7 @@
>> #include "iavf_log.h"
>> #include "iavf.h"
>> #include "iavf_generic_flow.h"
>> +#include "iavf_hash.h"
>>
>> #define IAVF_PHINT_NONE 0
>> #define IAVF_PHINT_GTPU BIT_ULL(0)
>> @@ -77,7 +78,7 @@ static int
>> iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
>> struct rte_flow_error *error);
>> static void
>> -iavf_hash_uninit(struct iavf_adapter *ad);
>> +iavf_hash_uninit_parser(struct iavf_adapter *ad);
>> static void
>> iavf_hash_free(struct rte_flow *flow);
>> static int
>> @@ -680,7 +681,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
>> .init = iavf_hash_init,
>> .create = iavf_hash_create,
>> .destroy = iavf_hash_destroy,
>> - .uninit = iavf_hash_uninit,
>> + .uninit = iavf_hash_uninit_parser,
>> .free = iavf_hash_free,
>> .type = IAVF_FLOW_ENGINE_HASH,
>> };
>> @@ -1641,6 +1642,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
>> }
>>
>> static void
>> +iavf_hash_uninit_parser(struct iavf_adapter *ad)
>> +{
>> + iavf_unregister_parser(&iavf_hash_parser, ad);
>> +}
>> +
>> +void
>> iavf_hash_uninit(struct iavf_adapter *ad)
>> {
>> struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
>> @@ -1658,8 +1665,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
>> rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
>> if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
>> PMD_DRV_LOG(ERR, "fail to delete default RSS");
>> -
>> - iavf_unregister_parser(&iavf_hash_parser, ad);
>> }
>>
>> static void
>> diff --git a/drivers/net/intel/iavf/iavf_hash.h b/drivers/net/intel/iavf/iavf_hash.h
>> new file mode 100644
>> index 0000000000..2348f32673
>> --- /dev/null
>> +++ b/drivers/net/intel/iavf/iavf_hash.h
>> @@ -0,0 +1,13 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2025 Intel Corporation
>> + */
>> +
>> +#ifndef _IAVF_HASH_H_
>> +#define _IAVF_HASH_H_
>> +
>> +#include "iavf.h"
>> +
>> +void
>> +iavf_hash_uninit(struct iavf_adapter *ad);
>> +
>> +#endif /* _IAVF_HASH_H_ */
>> --
>
> While its primarily a matter of taste, do we really need to create a new
> header for this? For a single function prototype can it not just go in
> iavf.h directly itself?
>
> Either way:
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
This code is not touched further down the line so yes, I can move it to
iavf.h no problem.
--
Thanks,
Anatoly
* Re: [PATCH v4 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-16 17:24 ` Bruce Richardson
@ 2026-02-18 10:45 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-18 10:45 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 6:24 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:30AM +0000, Anatoly Burakov wrote:
>> Currently, when configuring RSS (redirection table, lookup table, and
>> hash key), we are using rte_zmalloc followed by an immediate rte_free.
>> This is not needed as this memory is not being stored anywhere, so
>> replace it with regular malloc/free.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
>> drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
>> 2 files changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
>> index 70eb7e7ec5..d3fa47fd5e 100644
>> --- a/drivers/net/intel/iavf/iavf_ethdev.c
>> +++ b/drivers/net/intel/iavf/iavf_ethdev.c
>> @@ -1554,7 +1554,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
>> return -EINVAL;
>> }
>>
>> - lut = rte_zmalloc("rss_lut", reta_size, 0);
>> + lut = calloc(1, reta_size);
>
> As with i40e, can we make this (and the key allocation below) static based
> on max sizes supported?
>
It depends. Technically, IAVF does not specify a "max size"; it is
whatever the PF reports. In practice, the biggest we're going to get (i.e.
the biggest supported by i40e and ice) is 512, so we could hardcode that.
Similarly for RSS key, technically we don't know how big an RSS key we
can support. In practice, both ice and i40e only support 52-byte keys,
so we could hardcode that.
So, on the one hand, hardcoding this goes against every fiber of my
being because "IAVF doesn't know", but on the other we *know* these
values in practice. Which voice should I be listening to? :)
--
Thanks,
Anatoly
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-16 17:27 ` Bruce Richardson
@ 2026-02-19 9:22 ` Burakov, Anatoly
2026-02-19 9:29 ` Bruce Richardson
2026-02-19 13:21 ` Burakov, Anatoly
1 sibling, 1 reply; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-19 9:22 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 6:27 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
>> Currently, when adding or deleting MAC addresses, we are using
>> rte_zmalloc followed by an immediate rte_free. This is not needed as this
>> memory is not being stored anywhere, so replace it with regular
>> malloc/free.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
>> index 55986ef909..19dce17612 100644
>> --- a/drivers/net/intel/iavf/iavf_vchnl.c
>> +++ b/drivers/net/intel/iavf/iavf_vchnl.c
>> @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
>> }
>> }
>>
>> - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
>> + list = calloc(1, len);
>
> Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
> buffer of that fixed size might be better?
That size is 4 kilobytes, so while I agree that we can use one buffer
rather than constantly allocating/deallocating things, it'll still have
to be dynamically allocated.
>
> Also, that check itself seems a little off, since it allows buffers greater
> than the size, rather than ignoring the length of the address that pushes
> it over the limit.
Yes, this seems like an opportunity for a bugfix. I'll submit it separately.
>
>> if (!list) {
>> PMD_DRV_LOG(ERR, "fail to allocate memory");
>> return;
>> @@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
>> PMD_DRV_LOG(ERR, "fail to execute command %s",
>> add ? "OP_ADD_ETHER_ADDRESS" :
>> "OP_DEL_ETHER_ADDRESS");
>> - rte_free(list);
>> + free(list);
>> begin = next_begin;
>> } while (begin < IAVF_NUM_MACADDR_MAX);
>> }
>> --
>> 2.47.3
>>
--
Thanks,
Anatoly
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-19 9:22 ` Burakov, Anatoly
@ 2026-02-19 9:29 ` Bruce Richardson
2026-02-19 9:32 ` Bruce Richardson
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-19 9:29 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Vladimir Medvedkin
On Thu, Feb 19, 2026 at 10:22:24AM +0100, Burakov, Anatoly wrote:
> On 2/16/2026 6:27 PM, Bruce Richardson wrote:
> > On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
> > > Currently, when adding or deleting MAC addresses, we are using
> > > rte_zmalloc followed by an immediate rte_free. This is not needed as this
> > > memory is not being stored anywhere, so replace it with regular
> > > malloc/free.
> > >
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > > drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
> > > index 55986ef909..19dce17612 100644
> > > --- a/drivers/net/intel/iavf/iavf_vchnl.c
> > > +++ b/drivers/net/intel/iavf/iavf_vchnl.c
> > > @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
> > > }
> > > }
> > > - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
> > > + list = calloc(1, len);
> >
> > Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
> > buffer of that fixed size might be better?
>
> That size is 4 kilobytes, so I agree that we can use one buffer rather than
> constantly allocating/deallocating things, it'll still have to be
> dynamically allocated.
>
I still would use a stack variable myself. 4k really isn't that big
nowadays.
/Bruce
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-19 9:29 ` Bruce Richardson
@ 2026-02-19 9:32 ` Bruce Richardson
2026-02-19 9:39 ` Burakov, Anatoly
0 siblings, 1 reply; 297+ messages in thread
From: Bruce Richardson @ 2026-02-19 9:32 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev, Vladimir Medvedkin
On Thu, Feb 19, 2026 at 09:29:40AM +0000, Bruce Richardson wrote:
> On Thu, Feb 19, 2026 at 10:22:24AM +0100, Burakov, Anatoly wrote:
> > On 2/16/2026 6:27 PM, Bruce Richardson wrote:
> > > On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
> > > > Currently, when adding or deleting MAC addresses, we are using
> > > > rte_zmalloc followed by an immediate rte_free. This is not needed as this
> > > > memory is not being stored anywhere, so replace it with regular
> > > > malloc/free.
> > > >
> > > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > > ---
> > > > drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
> > > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
> > > > index 55986ef909..19dce17612 100644
> > > > --- a/drivers/net/intel/iavf/iavf_vchnl.c
> > > > +++ b/drivers/net/intel/iavf/iavf_vchnl.c
> > > > @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
> > > > }
> > > > }
> > > > - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
> > > > + list = calloc(1, len);
> > >
> > > Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
> > > buffer of that fixed size might be better?
> >
> > That size is 4 kilobytes, so I agree that we can use one buffer rather than
> > constantly allocating/deallocating things, it'll still have to be
> > dynamically allocated.
> >
> I still would use a stack variable myself. 4k really isn't that big
> nowadays.
>
4k is also PATH_MAX, and we use stack arrays of PATH_MAX size everywhere in
DPDK, so I think we can assume that it's an ok size to use as a stack
variable.
/Bruce
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-19 9:32 ` Bruce Richardson
@ 2026-02-19 9:39 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-19 9:39 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/19/2026 10:32 AM, Bruce Richardson wrote:
> On Thu, Feb 19, 2026 at 09:29:40AM +0000, Bruce Richardson wrote:
>> On Thu, Feb 19, 2026 at 10:22:24AM +0100, Burakov, Anatoly wrote:
>>> On 2/16/2026 6:27 PM, Bruce Richardson wrote:
>>>> On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
>>>>> Currently, when adding or deleting MAC addresses, we are using
>>>>> rte_zmalloc followed by an immediate rte_free. This is not needed as this
>>>>> memory is not being stored anywhere, so replace it with regular
>>>>> malloc/free.
>>>>>
>>>>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>>> ---
>>>>> drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
>>>>> index 55986ef909..19dce17612 100644
>>>>> --- a/drivers/net/intel/iavf/iavf_vchnl.c
>>>>> +++ b/drivers/net/intel/iavf/iavf_vchnl.c
>>>>> @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
>>>>> }
>>>>> }
>>>>> - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
>>>>> + list = calloc(1, len);
>>>>
>>>> Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
>>>> buffer of that fixed size might be better?
>>>
>>> That size is 4 kilobytes, so I agree that we can use one buffer rather than
>>> constantly allocating/deallocating things, it'll still have to be
>>> dynamically allocated.
>>>
>> I still would use a stack variable myself. 4k really isn't that big
>> nowadays.
>>
> 4k is also PATH_MAX, and we use stack arrays of PATH_MAX size everywhere in
> DPDK, so I think we can assume that it's an ok size to use as a stack
> variable.
>
> /Bruce
Ack. While attempting to figure out this code I just uncovered a whole
bunch of other places where we could replace things with a stack
allocation :/
(not submitting them in this patchset, but there's gonna be more rework
like this in the future)
--
Thanks,
Anatoly
* Re: [PATCH v4 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-16 17:37 ` Bruce Richardson
@ 2026-02-19 13:07 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-19 13:07 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
On 2/16/2026 6:37 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:38AM +0000, Anatoly Burakov wrote:
>> Currently, when allocating buffers for pattern match items and flow item
>> storage, we are using rte_zmalloc followed by immediate rte_free. This is
>> not needed as these buffers are only used temporarily within the function
>> scope, so replace it with regular calloc/free.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
>> drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
>> drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
>> drivers/net/intel/ice/ice_hash.c | 3 ++-
>> drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
>> 5 files changed, 17 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
>> index 38e30a4f62..6754a40044 100644
>> --- a/drivers/net/intel/ice/ice_acl_filter.c
>> +++ b/drivers/net/intel/ice/ice_acl_filter.c
>> @@ -9,6 +9,7 @@
>> #include <string.h>
>> #include <unistd.h>
>> #include <stdarg.h>
>> +#include <stdlib.h>
>> #include <rte_debug.h>
>> #include <rte_ether.h>
>> #include <ethdev_driver.h>
>> @@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
>> *meta = filter;
>>
>> error:
>> - rte_free(item);
>> + free(item);
>> return ret;
>> }
>
> Should this code be reworked so that the error is propagated back to caller
> and the item freed there so as allocation and freeing occur together in the
> one function - or even in the same file?
>
It should, and in fact further rework is also about fixing quirks like
these. With this patch though, I tried to minimize the changes and not
touch logic, because untangling these allocations is not trivial.
--
Thanks,
Anatoly
* Re: [PATCH v4 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-16 17:27 ` Bruce Richardson
2026-02-19 9:22 ` Burakov, Anatoly
@ 2026-02-19 13:21 ` Burakov, Anatoly
1 sibling, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-19 13:21 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Vladimir Medvedkin
On 2/16/2026 6:27 PM, Bruce Richardson wrote:
> On Fri, Feb 13, 2026 at 10:26:31AM +0000, Anatoly Burakov wrote:
>> Currently, when adding or deleting MAC addresses, we are using
>> rte_zmalloc followed by an immediate rte_free. This is not needed as this
>> memory is not being stored anywhere, so replace it with regular
>> malloc/free.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>> drivers/net/intel/iavf/iavf_vchnl.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
>> index 55986ef909..19dce17612 100644
>> --- a/drivers/net/intel/iavf/iavf_vchnl.c
>> +++ b/drivers/net/intel/iavf/iavf_vchnl.c
>> @@ -1402,7 +1402,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
>> }
>> }
>>
>> - list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
>> + list = calloc(1, len);
>
> Given the loop above has a threshold set for IAVF_AQ_BUF_SZ, maybe a static
> buffer of that fixed size might be better?
>
> Also, that check itself seems a little off, since it allows buffers greater
> than the size, rather than ignoring the length of the address that pushes
> it over the limit.
Fun fact: this entire thing is pointless, because sizeof() of MAC
address structure is 8 bytes, there's max 64 addresses, and the buffer
is 4K so it will *never* overflow and need to be split up. I'll remove
that code.
>
>> if (!list) {
>> PMD_DRV_LOG(ERR, "fail to allocate memory");
>> return;
>> @@ -1434,7 +1434,7 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
>> PMD_DRV_LOG(ERR, "fail to execute command %s",
>> add ? "OP_ADD_ETHER_ADDRESS" :
>> "OP_DEL_ETHER_ADDRESS");
>> - rte_free(list);
>> + free(list);
>> begin = next_begin;
>> } while (begin < IAVF_NUM_MACADDR_MAX);
>> }
>> --
>> 2.47.3
>>
--
Thanks,
Anatoly
* [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-17 12:13 ` [PATCH v5 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (27 more replies)
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (2 subsequent siblings)
19 siblings, 28 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on the driver bug fix patchset [1] (already integrated into next-net-intel).
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
v4 -> v5:
- Adjusted typing for queue size
- Fixed missing zero initializations for stack allocations
v5 -> v6:
- Addressed feedback for v3, v4, and v5
- Changed more allocations to be stack based
- Reworked queue and IRQ map related i40e patches for better logic
Anatoly Burakov (27):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 370 +++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 26 +-
drivers/net/intel/i40e/i40e_flow.c | 147 ++++---
drivers/net/intel/i40e/i40e_hash.c | 27 +-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +-
drivers/net/intel/i40e/rte_pmd_i40e.c | 60 +--
drivers/net/intel/iavf/iavf.h | 7 +-
drivers/net/intel/iavf/iavf_ethdev.c | 37 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 -
drivers/net/intel/iavf/iavf_hash.c | 13 +-
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 283 +++++--------
drivers/net/intel/iavf/iavf_vchnl.c | 412 ++++++++++---------
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 26 +-
drivers/net/intel/ice/ice_ethdev.c | 29 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 228 ++++++----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 783 insertions(+), 1041 deletions(-)
--
2.47.3
* [PATCH v6 01/27] net/ixgbe: remove MAC type check macros
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (26 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v6 02/27] net/ixgbe: remove security-related ifdefery
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (25 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 2857c19355..71deda9ed6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -459,7 +459,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -473,7 +472,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -652,9 +650,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -682,9 +678,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -696,7 +690,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -704,7 +697,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -896,10 +888,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1523,13 +1513,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2490,9 +2478,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2648,9 +2634,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2711,10 +2695,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2884,10 +2866,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3171,10 +3151,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5102,10 +5080,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5611,7 +5587,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5624,7 +5599,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v6 03/27] net/ixgbe: split security and ntuple filters
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
` (24 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code with
each other. Separate the security filter from the ntuple filter and parse it
separately.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 202 ++++++++++++++++-----------
1 file changed, 122 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..01cd4f9bde 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(&eth_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,112 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+ int ret;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action, which is
+ * const. however, we do need to act on the session, so either we do
+ * some kind of pointer based lookup to get session pointer internally
+ * (which quickly gets unwieldy for lots of flows case), or we simply
+ * cast away constness. the latter path was chosen.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+ if (ret) {
+ rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "Failed to add security session.");
+ return -rte_errno;
+ }
+ return 0;
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +699,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3133,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3369,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v6 04/27] net/i40e: get rid of global filter variables
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (2 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 05/27] net/i40e: make default RSS key global Anatoly Burakov
` (23 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact
that `rte_flow_validate()` is called directly from `rte_flow_create()`, with
no way to pass state between the two functions. Fix that by adding a small
wrapper around validation that creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything, so
it is omitted from the structure.
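The wrapper pattern can be sketched in miniature as below. This is a
hedged illustration only (the types and function names `filter_ctx`,
`flow_check`, `flow_validate`, `flow_create` are simplified stand-ins for
`i40e_filter_ctx`, `i40e_flow_check` and friends): parse results and the
detected filter type travel together in a caller-owned context instead of
through file-scope globals.

```c
#include <assert.h>
#include <string.h>

enum filter_type { FILTER_NONE, FILTER_FDIR, FILTER_HASH };

/* Miniature of the context: the detected type plus a stand-in for
 * the union of parsed filter configurations. */
struct filter_ctx {
	enum filter_type type;
	int parsed_value;
};

/* Shared check used by both validate and create. */
static int flow_check(int input, struct filter_ctx *ctx)
{
	if (input < 0)
		return -1;
	ctx->type = (input % 2) ? FILTER_HASH : FILTER_FDIR;
	ctx->parsed_value = input;
	return 0;
}

/* validate() becomes a thin wrapper that discards the dummy context. */
static int flow_validate(int input)
{
	struct filter_ctx ctx;
	memset(&ctx, 0, sizeof(ctx));
	return flow_check(input, &ctx);
}

/* create() reuses the same check and then consumes the context. */
static int flow_create(int input, enum filter_type *out)
{
	struct filter_ctx ctx;
	memset(&ctx, 0, sizeof(ctx));
	if (flow_check(input, &ctx) < 0)
		return -1;
	*out = ctx.type;
	return 0;
}
```

Because the context lives on each caller's stack, concurrent or nested
calls can no longer clobber each other's parse results.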
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
* [PATCH v6 05/27] net/i40e: make default RSS key global
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (3 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (22 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but
each of those places defines it as a local variable. Make it a global
constant and adjust all callers to use it. When dealing with the adminq, we
cannot send down the constant directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it down to hardware.
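The copy-before-send idea can be sketched as below. This is an assumption-
laden miniature, not the driver code: `rss_key_default`, `set_rss_key` and
`reset_rss_key` are hypothetical stand-ins for `i40e_rss_key_default`,
`i40e_set_rss_key` and `i40e_pf_reset_rss_key`, and the key is shortened
in spirit but kept at the real 52-byte length.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RSS_KEY_LEN 52  /* (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t) */

/* Hypothetical global default key; the remaining bytes are zero here
 * purely for brevity. */
static const uint8_t rss_key_default[RSS_KEY_LEN] = { 0x44, 0x39, 0x79, 0x6b };

/* Stand-in for an adminq-style API that takes a non-const buffer. */
static int set_rss_key(uint8_t *key, uint8_t key_len)
{
	return (key != NULL && key_len == RSS_KEY_LEN) ? 0 : -1;
}

/* Fall back to the shared const default when the user key is missing or
 * too short, and copy into a mutable stack buffer before handing it to
 * the non-const API. */
static int reset_rss_key(const uint8_t *user_key, size_t user_len)
{
	uint8_t key_buf[RSS_KEY_LEN];
	const uint8_t *src = user_key;

	if (src == NULL || user_len < sizeof(key_buf))
		src = rss_key_default;
	memcpy(key_buf, src, sizeof(key_buf));
	return set_rss_key(key_buf, sizeof(key_buf));
}
```

The local copy keeps the global constant genuinely immutable even if the
downstream command scribbles on its input buffer.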
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index c8153f3351..2deb87b01b 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9082,23 +9082,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v6 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (4 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 07/27] net/i40e: use proper flex len define Anatoly Burakov
` (21 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum traffic class
value of 64, we do not use unsigned values, which results in a compiler
warning when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned
value. Make it an unsigned 16-bit constant, and adjust callers to use the
correct types.
As a consequence, `i40e_align_floor` now returns an unsigned value as
well - this is correct, because nothing about that function implies that
signed usage is a valid use case.
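An unsigned round-down-to-power-of-two helper in the spirit of `i40e_align_floor` can be sketched as below. The name `align_floor` and the use of the GCC/Clang builtin `__builtin_clz` are assumptions for illustration; the driver uses its own bit-scan helpers.

```c
#include <assert.h>
#include <stdint.h>

/* round down to the nearest power of two; unsigned in, unsigned out,
 * matching the signature change described in the commit message above */
static inline uint32_t align_floor(uint32_t n)
{
	if (n == 0)
		return 0;
	/* highest set bit determines the result */
	return UINT32_C(1) << (31 - __builtin_clz(n));
}
```

With an unsigned return type, results can be compared directly against unsigned queue counts and an unsigned `I40E_MAX_Q_PER_TC` without sign-compare warnings or casts.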
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 2deb87b01b..608a6cff4d 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8978,11 +8978,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
/* Calculate the maximum number of contiguous PF queues that are configured */
-int
+uint16_t
i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
{
struct rte_eth_dev_data *data = pf->dev_data;
- int i, num;
+ int i;
+ uint16_t num;
struct ci_rx_queue *rxq;
num = 0;
@@ -9058,7 +9059,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ uint16_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
@@ -9074,7 +9075,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
return 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++)
- lut[i] = (uint8_t)(i % (uint32_t)num);
+ lut[i] = (uint8_t)(i % num);
return i40e_set_rss_lut(pf->main_vsi, lut, (uint16_t)i);
}
@@ -10771,7 +10772,7 @@ i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
return I40E_ERR_INVALID_QP_ID;
}
- qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+ qpnum_per_tc = RTE_MIN((uint16_t)i40e_align_floor(qpnum_per_tc),
I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..ca6638b32c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC UINT16_C(64)
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1456,7 +1456,7 @@ int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
void i40e_flex_payload_reg_set_default(struct i40e_hw *hw);
void i40e_pf_disable_rss(struct i40e_pf *pf);
-int i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
+uint16_t i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
int i40e_pf_reset_rss_reta(struct i40e_pf *pf);
int i40e_pf_reset_rss_key(struct i40e_pf *pf);
int i40e_pf_config_rss(struct i40e_pf *pf);
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..5756ebf255 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ uint16_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v6 07/27] net/i40e: use proper flex len define
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (5 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 08/27] net/i40e: remove global pattern variable Anatoly Burakov
` (20 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and the only reason this works is that they happen to evaluate to
the same value.
Use the i40e-specific definition for both instead.
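One way to turn such a silent coincidence into a build-time guarantee is a C11 static assertion. The defines and the `fdir_flow_ext` struct below are hypothetical stand-ins for the two length macros and the i40e structure, purely for illustration.

```c
#include <stdint.h>

/* hypothetical stand-ins for the two defines that happen to agree */
#define GENERIC_MAX_FLEXLEN	16
#define HW_MAX_FLEX_LEN		16

struct fdir_flow_ext {
	uint8_t flexbytes[HW_MAX_FLEX_LEN]; /* previously GENERIC_MAX_FLEXLEN */
	uint8_t flex_mask[HW_MAX_FLEX_LEN];
};

/* fail the build if spec and mask lengths ever diverge */
_Static_assert(sizeof(((struct fdir_flow_ext *)0)->flexbytes) ==
	       sizeof(((struct fdir_flow_ext *)0)->flex_mask),
	       "spec and mask flex arrays must have the same length");
```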
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index ca6638b32c..d57c53f661 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v6 08/27] net/i40e: remove global pattern variable
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (6 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (19 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list by
removing void flow items and copying the patterns into an array. For
patterns of up to 32 flow items that array is a global static buffer, while
bigger patterns get a list dynamically allocated with rte_zmalloc, which
seems like overkill for this use case.
Remove the global variable, and replace the split behavior with an
unconditional allocation using plain calloc.
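The count-then-copy scheme above can be sketched as follows, assuming a minimal item model. The `item_type` enum, `flow_item` struct and `pattern_skip_void` are illustrative stand-ins, not the rte_flow types or the driver's `i40e_pattern_skip_void_item`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* minimal stand-ins for rte_flow item types (illustrative only) */
enum item_type { ITEM_VOID, ITEM_ETH, ITEM_IPV4, ITEM_END };
struct flow_item { enum item_type type; };

/* count non-VOID items, allocate once with plain calloc, and copy them
 * over; the caller frees the result with free() */
static struct flow_item *pattern_skip_void(const struct flow_item *pattern)
{
	size_t num = 1; /* room for the trailing END item */
	struct flow_item *items;
	size_t i, j = 0;

	for (i = 0; pattern[i].type != ITEM_END; i++)
		if (pattern[i].type != ITEM_VOID)
			num++;

	items = calloc(num, sizeof(*items));
	if (items == NULL)
		return NULL;

	for (i = 0; pattern[i].type != ITEM_END; i++)
		if (pattern[i].type != ITEM_VOID)
			items[j++] = pattern[i];
	items[j] = pattern[i]; /* copy the END marker */
	return items;
}
```

A plain `calloc`/`free` pair is appropriate here because the array is control-path scratch memory that never needs to live in hugepages, and it removes the need to track whether the buffer was the global or a heap allocation.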
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 30 ++++++++++--------------------
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..2791139e59 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -145,9 +146,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3835,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3854,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3864,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v6 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (7 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (18 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting the tunnel configuration, we use rte_zmalloc for a
buffer that is freed again before the function returns. This memory does
not need to be stored in hugepage memory and the allocation is fairly
small, so replace it with a stack allocation.
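The shape of the refactor can be sketched as below: a zero-initialized stack object replaces the heap allocation, so every early-return error path drops its cleanup call. The `cloud_filter` struct and `tunnel_filter_set` are hypothetical stand-ins for `struct i40e_aqc_cloud_filters_element_bb` and the driver function.

```c
#include <assert.h>
#include <stdint.h>

/* illustrative stand-in for struct i40e_aqc_cloud_filters_element_bb */
struct cloud_filter { uint16_t flags; uint32_t tenant_id; };

static int tunnel_filter_set(uint32_t tenant_id, int type_supported)
{
	/* zero-initialized on the stack: no rte_zmalloc up front, and no
	 * rte_free on any of the error paths below */
	struct cloud_filter cld_filter = {0};

	if (!type_supported)
		return -22; /* was: rte_free(cld_filter); return -EINVAL; */

	cld_filter.tenant_id = tenant_id;
	cld_filter.flags |= 0x1;
	return 0;
}
```

Besides avoiding a pointless hugepage allocation, this removes a whole class of leak bugs: there is no longer a buffer that every error path must remember to free.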
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 608a6cff4d..d3404d7720 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8511,38 +8511,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8550,9 +8539,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8573,11 +8562,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8589,11 +8578,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8605,11 +8594,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8620,11 +8609,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8641,8 +8630,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8657,20 +8646,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8682,20 +8671,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8704,48 +8693,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8753,7 +8740,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8762,38 +8748,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8804,19 +8788,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v6 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (8 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (17 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
use rte_zmalloc for a buffer that is freed again before the function
returns. This memory does not need to be stored in hugepage memory and the
maximum LUT size is 512 bytes, so replace it with a stack allocation.
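The group/slot indexing used by the RETA update loops (entry i lives in group `i / 64`, slot `i % 64`) can be sketched with a stack LUT as follows. `RETA_SIZE`, `RETA_GROUP_SIZE`, `reta_group` and `reta_apply` are illustrative names standing in for `RTE_ETH_RSS_RETA_SIZE_512`, `RTE_ETH_RETA_GROUP_SIZE`, `struct rte_eth_rss_reta_entry64` and the driver's update loop.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RETA_SIZE	512	/* stands in for RTE_ETH_RSS_RETA_SIZE_512 */
#define RETA_GROUP_SIZE	64	/* stands in for RTE_ETH_RETA_GROUP_SIZE */

/* stand-in for struct rte_eth_rss_reta_entry64 */
struct reta_group { uint64_t mask; uint8_t reta[RETA_GROUP_SIZE]; };

/* apply RETA update groups to a LUT: only slots whose mask bit is set
 * are written, everything else keeps its current value */
static void reta_apply(const struct reta_group *conf, uint8_t *lut,
		       uint16_t size)
{
	for (uint16_t i = 0; i < size; i++) {
		uint16_t idx = i / RETA_GROUP_SIZE;
		uint16_t shift = i % RETA_GROUP_SIZE;

		if (conf[idx].mask & (UINT64_C(1) << shift))
			lut[i] = conf[idx].reta[shift];
	}
}
```

A 512-byte `uint8_t lut[RETA_SIZE]` is trivially stack-friendly, which is why the heap allocation in the original code buys nothing.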
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d3404d7720..d1f8e12689 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4619,7 +4619,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4630,14 +4630,9 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4648,9 +4643,6 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
-out:
- rte_free(lut);
-
return ret;
}
@@ -4662,7 +4654,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4673,15 +4665,9 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4689,9 +4675,6 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
return ret;
}
--
2.47.3
* [PATCH v6 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (9 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (16 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters, we
use rte_zmalloc for buffers that are freed again before the function
returns. This memory does not need to be stored in hugepage memory, so
replace it with regular malloc/free or stack allocations where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 135 +++++++++++---------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 17 ++--
2 files changed, 65 insertions(+), 87 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index d1f8e12689..72ff2ea59c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6,6 +6,7 @@
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
+#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
@@ -4128,7 +4129,6 @@ static int
i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_mac_filter_info *mac_filter;
struct i40e_vsi *vsi = pf->main_vsi;
struct rte_eth_rxmode *rxmode;
struct i40e_mac_filter *f;
@@ -4163,12 +4163,12 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
+
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
return I40E_ERR_NO_MEMORY;
}
@@ -4206,7 +4206,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6200,7 +6199,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
int i, num;
struct i40e_mac_filter *f;
void *temp;
- struct i40e_mac_filter_info *mac_filter;
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
enum i40e_mac_filter_type desired_filter;
int ret = I40E_SUCCESS;
@@ -6213,12 +6212,9 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
num = vsi->mac_num;
-
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
+ return -1;
}
i = 0;
@@ -6230,7 +6226,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
i++;
}
@@ -6242,13 +6238,11 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
}
-DONE:
- rte_free(mac_filter);
- return ret;
+ return 0;
}
/* Configure vlan stripping on or off */
@@ -7130,19 +7124,20 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_add_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_add_macvlan_element_data *req_list =
+ (struct i40e_aqc_add_macvlan_element_data *)aq_buff;
+
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
+ return I40E_ERR_NO_MEMORY;
+ }
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
- return I40E_ERR_NO_MEMORY;
- }
-
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7171,8 +7166,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC match type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].queue_number = 0;
@@ -7184,14 +7178,11 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
actual_num, NULL);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add macvlan filter");
- goto DONE;
+ return ret;
}
num += actual_num;
} while (num < total);
-
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7204,21 +7195,22 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_remove_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_remove_macvlan_element_data *req_list =
+ (struct i40e_aqc_remove_macvlan_element_data *)aq_buff;
enum i40e_admin_queue_err aq_status;
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
- ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
- ele_buff_size = hw->aq.asq_buf_size;
-
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
return I40E_ERR_NO_MEMORY;
}
+ ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
+ ele_buff_size = hw->aq.asq_buf_size;
+
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7247,8 +7239,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC filter type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].flags = rte_cpu_to_le_16(flags);
}
@@ -7262,15 +7253,13 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ret = I40E_SUCCESS;
} else {
PMD_DRV_LOG(ERR, "Failed to remove macvlan filter");
- goto DONE;
+ return ret;
}
}
num += actual_num;
} while (num < total);
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
/* Find out specific MAC filter */
@@ -7438,7 +7427,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7467,7 +7456,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7475,7 +7464,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
int
i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7492,37 +7481,31 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
i40e_set_vlan_filter(vsi, vlan, 1);
vsi->vlan_num++;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7543,42 +7526,36 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
/* This is last vlan to remove, replace all mac filter with vlan 0 */
if (vsi->vlan_num == 1) {
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
}
i40e_set_vlan_filter(vsi, vlan, 0);
vsi->vlan_num--;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7609,7 +7586,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7648,7 +7625,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7679,7 +7656,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7708,7 +7685,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..4839a1d9bf 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -2,6 +2,7 @@
* Copyright(c) 2010-2017 Intel Corporation
*/
+#include <stdlib.h>
#include <eal_export.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -233,7 +234,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +251,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +295,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +313,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v6 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (10 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (15 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This memory does not need to be stored in hugepage memory and
the allocation size is pretty small, so replace it with stack allocation.
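The shape of the change can be sketched as follows. This is a minimal, self-contained illustration rather than the driver code: the structure names and field layout are hypothetical stand-ins for the virtchnl types, which embed the VSI array after a fixed header.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for virtchnl_vf_resource / virtchnl_vsi_resource */
struct vsi_resource { uint16_t vsi_id; };
struct vf_resource  { uint16_t num_vsis; struct vsi_resource vsi_res[1]; };

/* Old pattern: len = sizeof(vf_res) + N * sizeof(vsi_res);
 *              vf_res = rte_zmalloc(..., len, 0); ... rte_free(vf_res);
 * New pattern: since there is only ever one VSI, a compound struct on the
 * stack provides the whole zeroed response buffer with no failure path. */
static uint32_t build_vf_response(uint8_t *out, uint32_t out_cap)
{
	struct {
		struct vf_resource  vf_res;   /* header, embeds one vsi_res */
		struct vsi_resource vsi_res;  /* extra slot, mirroring the patch */
	} res = {0};
	uint32_t len = sizeof(res);       /* known at compile time */

	res.vf_res.num_vsis = 1;
	res.vf_res.vsi_res[0].vsi_id = 3;

	if (len > out_cap)
		return 0;
	memcpy(out, &res, len);
	return len;
}
```

The compound struct replaces a runtime size computation with a compile-time one, so the allocation-failure branch and the cleanup label disappear entirely.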
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..08cdd6bc4d 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
+ uint32_t len = sizeof(res);
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v6 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (11 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (14 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with stack allocation.
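As a sketch of the pattern (illustrative only — `AQ_BUF_SZ`, the trimmed-down event structure, and `handle_aq_msg` are stand-ins, not the driver's symbols):

```c
#include <assert.h>
#include <stdint.h>

#define AQ_BUF_SZ 4096	/* worst-case admin queue buffer, assumed for illustration */

struct arq_event_info {
	uint16_t buf_len;
	uint8_t *msg_buf;
};

/* Old: info.msg_buf = rte_zmalloc("msg_buffer", I40E_AQ_BUF_SZ, 0);
 *      ... drain events ... rte_free(info.msg_buf);
 * New: point the event structure at a zeroed stack buffer; the early
 * return on allocation failure and the trailing free are gone. */
static int handle_aq_msg(void)
{
	uint8_t msg_buf[AQ_BUF_SZ] = {0};
	struct arq_event_info info;

	info.buf_len = sizeof(msg_buf);
	info.msg_buf = msg_buf;

	/* ... drain pending admin queue events into info.msg_buf ... */
	return info.buf_len;
}
```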
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 72ff2ea59c..fff832fba1 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6872,14 +6872,11 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_arq_event_info info;
uint16_t pending, opcode;
+ uint8_t msg_buf[I40E_AQ_BUF_SZ] = {0};
int ret;
- info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
- if (!info.msg_buf) {
- PMD_DRV_LOG(ERR, "Failed to allocate mem");
- return;
- }
+ info.buf_len = sizeof(msg_buf);
+ info.msg_buf = msg_buf;
pending = 1;
while (pending) {
@@ -6915,7 +6912,6 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v6 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (12 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (13 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by
immediate rte_free. This memory does not need to be stored in hugepage
memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++----------------------
1 file changed, 8 insertions(+), 35 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index 4839a1d9bf..7892fa8a4e 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1557,7 +1557,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
{
struct rte_eth_dev *dev = &rte_eth_devices[port];
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint8_t *buff;
+ uint8_t buff[(I40E_MAX_PROFILE_NUM + 4) * I40E_PROFILE_INFO_SIZE] = {0};
struct rte_pmd_i40e_profile_list *p_list;
struct rte_pmd_i40e_profile_info *pinfo, *p;
uint32_t i;
@@ -1570,13 +1570,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
- if (!buff) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return -1;
- }
ret = i40e_aq_get_ddp_list(
hw, (void *)buff,
@@ -1584,7 +1577,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1592,20 +1584,17 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
return 2;
}
}
@@ -1616,12 +1605,9 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
return 3;
}
}
-
- rte_free(buff);
return 0;
}
@@ -1637,7 +1623,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1702,26 +1691,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1734,13 +1712,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1752,7 +1728,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1765,14 +1740,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1785,7 +1759,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v6 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (13 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:22 ` [PATCH v6 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (12 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by immediate
rte_free. This memory does not need to be stored in hugepage memory, so
replace it with stack allocation or regular calloc/free as appropriate.
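The rule of thumb here is: buffers with a fixed worst-case size go on the stack, while truly runtime-sized buffers move to plain calloc/free. A hedged sketch of the latter (names are illustrative; the point is that zeroed, short-lived memory never handed to hardware does not need DPDK's hugepage heap):

```c
#include <assert.h>
#include <stdlib.h>

struct proto_info { unsigned id; };

/* Old: proto = rte_zmalloc("new_proto", buff_size, 0); ... rte_free(proto);
 * New: calloc gives the same zero-initialized memory from the regular heap. */
static int count_zero_protocols(unsigned proto_num)
{
	struct proto_info *proto;
	int n = 0;

	proto = calloc(proto_num, sizeof(*proto));	/* zeroed, like rte_zmalloc */
	if (proto == NULL)
		return -1;

	for (unsigned i = 0; i < proto_num; i++)
		if (proto[i].id == 0)	/* calloc guarantees zero-init */
			n++;

	free(proto);
	return n;
}
```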
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 43 ++++++++--------------------
1 file changed, 12 insertions(+), 31 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index fff832fba1..faf35acde0 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11718,8 +11718,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint32_t pctype_num;
- struct rte_pmd_i40e_ptype_info *pctype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info pctype[I40E_CUSTOMIZED_MAX] = {0};
struct i40e_customized_pctype *new_pctype = NULL;
uint8_t proto_id;
uint8_t pctype_value;
@@ -11745,19 +11744,16 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
- if (!pctype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (pctype_num > RTE_DIM(pctype)) {
+ PMD_DRV_LOG(ERR, "Pctype number exceeds maximum supported");
return -1;
}
/* get information about new pctype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)pctype, buff_size,
+ (uint8_t *)pctype, sizeof(pctype),
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
return -1;
}
@@ -11838,7 +11834,6 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
return 0;
}
@@ -11848,11 +11843,10 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
struct rte_pmd_i40e_proto_info *proto,
enum rte_pmd_i40e_package_op op)
{
- struct rte_pmd_i40e_ptype_mapping *ptype_mapping;
+ struct rte_pmd_i40e_ptype_mapping ptype_mapping[I40E_MAX_PKT_TYPE] = {0};
uint16_t port_id = dev->data->port_id;
uint32_t ptype_num;
- struct rte_pmd_i40e_ptype_info *ptype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info ptype[I40E_MAX_PKT_TYPE] = {0};
uint8_t proto_id;
char name[RTE_PMD_I40E_DDP_NAME_SIZE];
uint32_t i, j, n;
@@ -11883,31 +11877,20 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
- if (!ptype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (ptype_num > RTE_DIM(ptype)) {
+ PMD_DRV_LOG(ERR, "Too many ptypes");
return -1;
}
/* get information about new ptype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)ptype, buff_size,
+ (uint8_t *)ptype, sizeof(ptype),
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
- if (!ptype_mapping) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
- return -1;
- }
-
/* Update ptype mapping table. */
for (i = 0; i < ptype_num; i++) {
ptype_mapping[i].hw_ptype = ptype[i].ptype_id;
@@ -12042,8 +12025,6 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
return ret;
}
@@ -12078,7 +12059,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12090,7 +12071,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12128,7 +12109,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v6 16/27] net/iavf: remove remnants of pipeline mode
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (14 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-19 16:22 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 17/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (11 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:22 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the definitions it relied on were
left behind in the code. Remove them, as they are now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 6d41b1744e..66eaea8715 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v6 17/27] net/iavf: decouple hash uninit from parser uninit
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (15 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (10 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization will trigger removal of the current RSS
configuration. This should not be done as part of parser deinitialization,
but should rather be a separate step in the dev close flow.
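The decoupling can be sketched as follows. This is illustrative C, not the driver code: the function names mirror the patch, but the parser and RSS state are mocked with flags.

```c
#include <assert.h>

static int parser_registered = 1;	/* mock: hash flow parser state */
static int rss_configured = 1;		/* mock: device RSS state */

/* Engine uninit callback: now only unregisters the parser. */
static void hash_uninit_parser(void) { parser_registered = 0; }

/* RSS teardown: now a separate, explicitly invoked step. */
static void hash_uninit(void) { rss_configured = 0; }

/* dev_close() calls both in order, instead of the parser uninit
 * implicitly dragging the RSS teardown along with it.
 * Returns 0 when both teardown steps completed. */
static int dev_close(void)
{
	hash_uninit();		/* remove RSS configuration */
	hash_uninit_parser();	/* then unregister the parser */
	return parser_registered + rss_configured;
}
```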
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf.h | 1 +
drivers/net/intel/iavf/iavf_ethdev.c | 3 +++
drivers/net/intel/iavf/iavf_hash.c | 12 ++++++++----
3 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 39949acc11..6054321771 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -566,4 +566,5 @@ void iavf_dev_watchdog_disable(struct iavf_adapter *adapter);
void iavf_handle_hw_reset(struct rte_eth_dev *dev, bool vf_initiated_reset);
void iavf_set_no_poll(struct iavf_adapter *adapter, bool link_change);
bool is_iavf_supported(struct rte_eth_dev *dev);
+void iavf_hash_uninit(struct iavf_adapter *ad);
#endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 802e095174..e978284bf2 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -2972,6 +2972,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..cb10eeab78 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -77,7 +77,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +680,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1641,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1664,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
--
2.47.3
* [PATCH v6 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses
rte_malloc to allocate VF message structures. This memory does not need
to live in hugepage memory, and the allocations are small and
short-lived, so replace them with stack allocations.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 160 +++++++--------------
1 file changed, 54 insertions(+), 106 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 66eaea8715..aa4cf1d96d 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,25 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp = {0};
+ struct inline_ipsec_msg *request = &sa_req.msg;
+ struct inline_ipsec_msg *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +530,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +541,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,19 +708,18 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -768,21 +753,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +767,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,26 +774,18 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -833,10 +797,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +809,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,26 +859,18 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp = {0};
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -931,21 +883,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v6 19/27] net/iavf: avoid rte malloc in RSS configuration
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (the redirection/lookup table and the
hash key), we use rte_zmalloc followed by an immediate rte_free.
This memory does not need to be stored in hugepage memory, and since the
iavf driver does not bound the size of these structures at compile time,
stack allocation is not an option; replace the allocations with regular
calloc/free instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index e978284bf2..f74a0cb21f 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1553,7 +1553,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1573,7 +1573,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v6 20/27] net/iavf: avoid rte malloc in MAC address operations
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
The original code also had a loop that split the MAC address list into
multiple virtchnl messages. However, the maximum number of MAC addresses
is 64, each virtchnl_ether_addr entry is 8 bytes, and the maximum
virtchnl message size is 4K, so the full list always fits into a single
message and the split is unnecessary. This loop has been removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 82 ++++++++++-------------------
1 file changed, 29 insertions(+), 53 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..f44dc7e7be 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1380,63 +1380,39 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
void
iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
- struct virtchnl_ether_addr_list *list;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[IAVF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct rte_ether_addr *addr;
- struct iavf_cmd_info args;
- int len, err, i, j;
- int next_begin = 0;
- int begin = 0;
+ struct iavf_cmd_info args = {0};
+ int err, i;
- do {
- j = 0;
- len = sizeof(struct virtchnl_ether_addr_list);
- for (i = begin; i < IAVF_NUM_MACADDR_MAX; i++, next_begin++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- len += sizeof(struct virtchnl_ether_addr);
- if (len >= IAVF_AQ_BUF_SZ) {
- next_begin = i + 1;
- break;
- }
- }
+ for (i = 0; i < IAVF_NUM_MACADDR_MAX; i++) {
+ struct rte_ether_addr *addr = &adapter->dev_data->mac_addrs[i];
+ struct virtchnl_ether_addr *vc_addr = &list->list[list->num_elements];
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return;
- }
+ /* ignore empty addresses */
+ if (rte_is_zero_ether_addr(addr))
+ continue;
+ list->num_elements++;
- for (i = begin; i < next_begin; i++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- rte_memcpy(list->list[j].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
- list->list[j].type = (j == 0 ?
- VIRTCHNL_ETHER_ADDR_PRIMARY :
- VIRTCHNL_ETHER_ADDR_EXTRA);
- PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
- RTE_ETHER_ADDR_BYTES(addr));
- j++;
- }
- list->vsi_id = vf->vsi_res->vsi_id;
- list->num_elements = j;
- args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
- VIRTCHNL_OP_DEL_ETH_ADDR;
- args.in_args = (uint8_t *)list;
- args.in_args_size = len;
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command %s",
- add ? "OP_ADD_ETHER_ADDRESS" :
- "OP_DEL_ETHER_ADDRESS");
- rte_free(list);
- begin = next_begin;
- } while (begin < IAVF_NUM_MACADDR_MAX);
+ memcpy(vc_addr->addr, addr->addr_bytes, sizeof(addr->addr_bytes));
+ vc_addr->type = (list->num_elements == 1) ?
+ VIRTCHNL_ETHER_ADDR_PRIMARY :
+ VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+ list->vsi_id = vf->vsi_res->vsi_id;
+ args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.in_args = (uint8_t *)list;
+ args.in_args_size = sizeof(list_req);
+ args.out_buffer = vf->aq_resp;
+ args.out_size = IAVF_AQ_BUF_SZ;
+ err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" : "OP_DEL_ETHER_ADDRESS");
}
int
--
2.47.3
* [PATCH v6 21/27] net/iavf: avoid rte malloc in IPsec operations
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we are using rte_malloc followed by an
immediate rte_free. This memory does not need to be stored in hugepage
memory and the allocations are small, so replace them with stack
allocations.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 122 +++++++--------------
1 file changed, 38 insertions(+), 84 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index aa4cf1d96d..98aa8b115b 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -902,29 +903,18 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp = {0};
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
request->req_id = (uint16_t)0xDEADBEEF;
@@ -942,10 +932,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -960,10 +950,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1109,51 +1095,35 @@ static int
iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
struct virtchnl_ipsec_cap *capability)
{
+ struct {
+ struct inline_ipsec_msg msg;
+ } req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_cap cap;
+ } resp = {0};
/* Perform pf-vf comms */
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg);
-
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_GET_CAP;
request->req_id = (uint16_t)0xDEADBEEF;
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id){
- rc = -EFAULT;
- goto update_cleanup;
+ return -EFAULT;
}
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1535,50 +1505,34 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
struct virtchnl_ipsec_status *status)
{
/* Perform pf-vf comms */
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ } req = {0};
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_cap cap;
+ } resp = {0};
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg);
-
- request = rte_malloc("iavf-device-status-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_GET_STATUS;
request->req_id = (uint16_t)0xDEADBEEF;
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
- response->req_id != request->req_id){
- rc = -EFAULT;
- goto update_cleanup;
+ response->req_id != request->req_id){
+ return -EFAULT;
}
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
--
2.47.3
* [PATCH v6 22/27] net/iavf: avoid rte malloc in queue operations
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is not needed as these
structures are not being stored anywhere, so replace them with stack
allocation.
The original code also did not check the maximum queue number, because
the design was built around an anti-pattern of the caller having to
chunk the queue configuration. This has been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +-
drivers/net/intel/iavf/iavf_vchnl.c | 212 ++++++++++++++-------------
3 files changed, 115 insertions(+), 115 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 6054321771..77a2c94290 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -503,8 +503,7 @@ int iavf_disable_queues(struct iavf_adapter *adapter);
int iavf_disable_queues_lv(struct iavf_adapter *adapter);
int iavf_configure_rss_lut(struct iavf_adapter *adapter);
int iavf_configure_rss_key(struct iavf_adapter *adapter);
-int iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index);
+int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs);
int iavf_get_supported_rxdid(struct iavf_adapter *adapter);
int iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, bool enable);
int iavf_config_vlan_insert_v2(struct iavf_adapter *adapter, bool enable);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index f74a0cb21f..f7bfd099ed 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1036,20 +1036,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_set_vf_quanta_size(adapter, index, num_queue_pairs) != 0)
PMD_DRV_LOG(WARNING, "configure quanta size failed");
- /* If needed, send configure queues msg multiple times to make the
- * adminq buffer length smaller than the 4K limitation.
- */
- while (num_queue_pairs > IAVF_CFG_Q_NUM_PER_BUF) {
- if (iavf_configure_queues(adapter,
- IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
- PMD_DRV_LOG(ERR, "configure queues failed");
- goto error;
- }
- num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
- index += IAVF_CFG_Q_NUM_PER_BUF;
- }
-
- if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
+ if (iavf_configure_queues(adapter, num_queue_pairs) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
goto error;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f44dc7e7be..f0ab3b950b 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,14 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1125,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1133,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1214,88 +1200,116 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
return err;
}
-int
-iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index)
+static void
+iavf_configure_queue_pair(struct iavf_adapter *adapter,
+ struct virtchnl_queue_pair_info *vc_qp,
+ uint16_t q_idx)
{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct ci_rx_queue **rxq = (struct ci_rx_queue **)adapter->dev_data->rx_queues;
struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
+
+ /* common parts */
+ vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->txq.queue_id = q_idx;
+
+ vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->rxq.queue_id = q_idx;
+ vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+
+ /* is this txq active? */
+ if (q_idx < adapter->dev_data->nb_tx_queues) {
+ vc_qp->txq.ring_len = txq[q_idx]->nb_tx_desc;
+ vc_qp->txq.dma_ring_addr = txq[q_idx]->tx_ring_dma;
+ }
+
+ /* is this rxq active? */
+ if (q_idx >= adapter->dev_data->nb_rx_queues)
+ return;
+
+ vc_qp->rxq.ring_len = rxq[q_idx]->nb_rx_desc;
+ vc_qp->rxq.dma_ring_addr = rxq[q_idx]->rx_ring_phys_addr;
+ vc_qp->rxq.databuffer_size = rxq[q_idx]->rx_buf_len;
+ vc_qp->rxq.crc_disable = rxq[q_idx]->crc_len != 0 ? 1 : 0;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ if (vf->supported_rxdid & RTE_BIT64(rxq[q_idx]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[q_idx]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, q_idx);
+ } else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[q_idx]->rxdid, IAVF_RXDID_LEGACY_1, q_idx);
+ vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+ }
+
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
+ vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
+ rxq[q_idx]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
+ }
+}
+
+static int
+iavf_configure_queue_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
+{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_vsi_queue_config_info *vc_config;
- struct virtchnl_queue_pair_info *vc_qp;
- struct iavf_cmd_info args;
- uint16_t i, size;
+ struct {
+ struct virtchnl_vsi_queue_config_info config;
+ struct virtchnl_queue_pair_info qp[IAVF_CFG_Q_NUM_PER_BUF];
+ } queue_req = {0};
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_vsi_queue_config_info *vc_config = &queue_req.config;
+ struct virtchnl_queue_pair_info *vc_qp = vc_config->qpair;
+ uint16_t chunk_end = chunk_start + chunk_sz;
+ uint16_t i;
int err;
- size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
- if (!vc_config)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
vc_config->vsi_id = vf->vsi_res->vsi_id;
- vc_config->num_queue_pairs = num_queue_pairs;
+ vc_config->num_queue_pairs = chunk_sz;
- for (i = index, vc_qp = vc_config->qpair;
- i < index + num_queue_pairs;
- i++, vc_qp++) {
- vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->txq.queue_id = i;
+ for (i = chunk_start; i < chunk_end; i++, vc_qp++)
+ iavf_configure_queue_pair(adapter, vc_qp, i);
- /* Virtchnnl configure tx queues by pairs */
- if (i < adapter->dev_data->nb_tx_queues) {
- vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
- }
-
- vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->rxq.queue_id = i;
- vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
-
- if (i >= adapter->dev_data->nb_rx_queues)
- continue;
-
- /* Virtchnnl configure rx queues by pairs */
- vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
- vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
- vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
- vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
- if (vf->supported_rxdid & RTE_BIT64(rxq[i]->rxdid)) {
- vc_qp->rxq.rxdid = rxq[i]->rxdid;
- PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
- vc_qp->rxq.rxdid, i);
- } else {
- PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
- "request default RXDID[%d] in Queue[%d]",
- rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
- vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- }
-
- if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
- vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
- rxq[i]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
- }
- }
-
- memset(&args, 0, sizeof(args));
args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
args.in_args = (uint8_t *)vc_config;
- args.in_args_size = size;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
- PMD_DRV_LOG(ERR, "Failed to execute command of"
- " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
-
- rte_free(vc_config);
+ PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL_OP_CONFIG_VSI_QUEUES");
return err;
}
+int
+iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs)
+{
+ uint16_t c;
+ int err;
+
+ /*
+ * we cannot configure all queues in one go because they won't fit into
+ * adminq buffer, so we're going to chunk them instead
+ */
+ for (c = 0; c < num_queue_pairs; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num_queue_pairs - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_configure_queue_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
+}
+
int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v6 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (21 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (4 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This memory does not need to be stored in
hugepage memory, so replace it with stack allocation.
The original code did not check the maximum IRQ map size, because the design
was built around an anti-pattern of the caller having to chunk the IRQ map
calls. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +---
drivers/net/intel/iavf/iavf_vchnl.c | 110 +++++++++++++++++----------
3 files changed, 72 insertions(+), 56 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 77a2c94290..f9bb398a77 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -511,8 +511,7 @@ int iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t vlanid,
bool add);
int iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter);
int iavf_config_irq_map(struct iavf_adapter *adapter);
-int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index);
+int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num);
void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
int iavf_dev_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index f7bfd099ed..89826547f0 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -919,20 +919,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
goto config_irq_map_err;
}
} else {
- uint16_t num_qv_maps = dev->data->nb_rx_queues;
- uint16_t index = 0;
-
- while (num_qv_maps > IAVF_IRQ_MAP_NUM_PER_BUF) {
- if (iavf_config_irq_map_lv(adapter,
- IAVF_IRQ_MAP_NUM_PER_BUF, index)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
- goto config_irq_map_err;
- }
- num_qv_maps -= IAVF_IRQ_MAP_NUM_PER_BUF;
- index += IAVF_IRQ_MAP_NUM_PER_BUF;
- }
-
- if (iavf_config_irq_map_lv(adapter, num_qv_maps, index)) {
+ if (iavf_config_irq_map_lv(adapter, dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
goto config_irq_map_err;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f0ab3b950b..baadb4b686 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1314,81 +1314,111 @@ int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_irq_map_info *map_info;
- struct virtchnl_vector_map *vecmap;
- struct iavf_cmd_info args;
- int len, i, err;
+ struct {
+ struct virtchnl_irq_map_info map_info;
+ struct virtchnl_vector_map vecmap[IAVF_MAX_NUM_QUEUES_DFLT];
+ } map_req = {0};
+ struct virtchnl_irq_map_info *map_info = &map_req.map_info;
+ struct iavf_cmd_info args = {0};
+ int i, err, max_vmi = -1;
- len = sizeof(struct virtchnl_irq_map_info) +
- sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+ if (adapter->dev_data->nb_rx_queues > IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "number of queues (%u) exceeds the max supported (%u)",
+ adapter->dev_data->nb_rx_queues, IAVF_MAX_NUM_QUEUES_DFLT);
+ return -EINVAL;
+ }
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
-
- map_info->num_vectors = vf->nb_msix;
for (i = 0; i < adapter->dev_data->nb_rx_queues; i++) {
- vecmap =
- &map_info->vecmap[vf->qv_map[i].vector_id - vf->msix_base];
+ struct virtchnl_vector_map *vecmap;
+ /* always 0 for 1 MSIX, never bigger than rxq for multi MSIX */
+ uint16_t vmi = vf->qv_map[i].vector_id - vf->msix_base;
+
+ /* can't happen but avoid static analysis warnings */
+ if (vmi >= IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "vector id (%u) exceeds the max supported (%u)",
+ vf->qv_map[i].vector_id,
+ vf->msix_base + IAVF_MAX_NUM_QUEUES_DFLT - 1);
+ return -EINVAL;
+ }
+
+ vecmap = &map_info->vecmap[vmi];
vecmap->vsi_id = vf->vsi_res->vsi_id;
vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
vecmap->vector_id = vf->qv_map[i].vector_id;
vecmap->txq_map = 0;
vecmap->rxq_map |= 1 << vf->qv_map[i].queue_id;
+
+ /* MSIX vectors round robin so look for max */
+ if (vmi > max_vmi) {
+ map_info->num_vectors++;
+ max_vmi = vmi;
+ }
}
args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(map_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
return err;
}
-int
-iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index)
+static int
+iavf_config_irq_map_lv_chunk(struct iavf_adapter *adapter, uint16_t chunk_sz, uint16_t chunk_start)
{
+ struct {
+ struct virtchnl_queue_vector_maps map_info;
+ struct virtchnl_queue_vector qv_maps[IAVF_CFG_Q_NUM_PER_BUF];
+ } chunk_req = {0};
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_queue_vector_maps *map_info;
- struct virtchnl_queue_vector *qv_maps;
- struct iavf_cmd_info args;
- int len, i, err;
- int count = 0;
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_queue_vector_maps *map_info = &chunk_req.map_info;
+ struct virtchnl_queue_vector *qv_maps = chunk_req.qv_maps;
+ uint16_t chunk_end = chunk_start + chunk_sz;
+ uint16_t i;
- len = sizeof(struct virtchnl_queue_vector_maps) +
- sizeof(struct virtchnl_queue_vector) * (num - 1);
-
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
map_info->vport_id = vf->vsi_res->vsi_id;
- map_info->num_qv_maps = num;
- for (i = index; i < index + map_info->num_qv_maps; i++) {
- qv_maps = &map_info->qv_maps[count++];
+ map_info->num_qv_maps = chunk_sz;
+ for (i = chunk_start; i < chunk_end; i++) {
+ qv_maps = &map_info->qv_maps[i];
qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
- qv_maps->queue_id = vf->qv_map[i].queue_id;
- qv_maps->vector_id = vf->qv_map[i].vector_id;
+ qv_maps->queue_id = vf->qv_map[chunk_start + i].queue_id;
+ qv_maps->vector_id = vf->qv_map[chunk_start + i].vector_id;
}
args.ops = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(chunk_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
- return err;
+ return iavf_execute_vf_cmd_safe(adapter, &args, 0);
+}
+
+int
+iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num)
+{
+ uint16_t c;
+ int err;
+
+ for (c = 0; c < num; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_config_irq_map_lv_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure irq map chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
}
void
--
2.47.3
* [PATCH v6 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (22 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (3 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This memory does
not need to be stored in hugepage memory, so replace it with stack
allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 29 ++++++--------------------
2 files changed, 8 insertions(+), 25 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 81da5a4656..037382b336 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1338,7 +1338,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1358,7 +1358,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index ade13600de..b9d8b2a3ac 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5566,7 +5566,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 &&
@@ -5583,14 +5583,9 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = ice_get_rss_lut(pf->main_vsi, lut, lut_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5606,10 +5601,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
pf->hash_lut_size = reta_size;
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
@@ -5620,7 +5612,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != lut_size) {
@@ -5632,15 +5624,9 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5649,10 +5635,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
--
2.47.3
* [PATCH v6 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (23 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
` (2 subsequent siblings)
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 037382b336..332f1e356e 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -928,19 +928,14 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
struct rte_ether_addr *mc_addrs,
uint32_t mc_addrs_num, bool add)
{
- struct virtchnl_ether_addr_list *list;
- struct dcf_virtchnl_cmd args;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[DCF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
+ struct dcf_virtchnl_cmd args = {0};
uint32_t i;
- int len, err = 0;
-
- len = sizeof(struct virtchnl_ether_addr_list);
- len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
-
- list = rte_zmalloc(NULL, len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return -ENOMEM;
- }
+ int err = 0;
for (i = 0; i < mc_addrs_num; i++) {
memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
@@ -955,13 +950,12 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
VIRTCHNL_OP_DEL_ETH_ADDR;
args.req_msg = (uint8_t *)list;
- args.req_msglen = len;
+ args.req_msglen = sizeof(list_req);
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
return err;
}
--
2.47.3
* [PATCH v6 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (24 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 16:23 ` [PATCH v6 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-19 19:08 ` [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Stephen Hemminger
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index da22b65a77..5f44b5c818 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1879,13 +1879,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -1950,13 +1950,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index afdc8f220a..854c6e8dca 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -676,13 +676,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -733,8 +733,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v6 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (25 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-19 16:23 ` Anatoly Burakov
2026-02-19 19:08 ` [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Stephen Hemminger
27 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-19 16:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
memory does not need to be stored in hugepage memory, so replace it with
regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 5f44b5c818..8cca831fa9 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2504,11 +2505,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 4049157eab..3f7a9f4714 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2136,19 +2137,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2167,7 +2166,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2175,8 +2174,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index 854c6e8dca..1174c505da 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1211,7 +1212,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
* Re: [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
` (26 preceding siblings ...)
2026-02-19 16:23 ` [PATCH v6 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
@ 2026-02-19 19:08 ` Stephen Hemminger
2026-02-20 9:50 ` Burakov, Anatoly
27 siblings, 1 reply; 297+ messages in thread
From: Stephen Hemminger @ 2026-02-19 19:08 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Thu, 19 Feb 2026 16:22:43 +0000
Anatoly Burakov <anatoly.burakov@intel.com> wrote:
> This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
AI review had some observations here:
Patch 20/27: net/iavf: avoid rte malloc in MAC address operations
in_args_size is always sizeof(list_req) (full 64-entry struct) regardless of how many addresses are populated. The old code computed an exact-fit length. Probably harmless since num_elements governs PF-side parsing, but worth verifying.
Patch 21/27: net/iavf: avoid rte malloc in IPsec operations
Pre-existing: iavf_ipsec_crypto_status_get() response struct uses struct virtchnl_ipsec_cap but the function reads ipsec_status. The old code had the same mismatch. Since you're refactoring this function, consider fixing the response type to struct virtchnl_ipsec_status.
Patch 23/27: net/iavf: avoid rte malloc in irq map config
iavf_config_irq_map_lv_chunk(): double-offset bug. The loop runs for (i = chunk_start; i < chunk_end; i++) but then indexes map_info->qv_maps[i] (should be [i - chunk_start]) and vf->qv_map[chunk_start + i] (should be [i]). For the second chunk, this writes past the local array and reads the wrong qv_map entries.
iavf_config_irq_map(): the num_vectors counting via if (vmi > max_vmi) only works if vector IDs are assigned in monotonically increasing order across the queue iteration. If not (e.g., round-robin where a lower vmi appears after a higher one), the count will be too low. The old code simply used vf->nb_msix which is always correct.
* Re: [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-19 19:08 ` [PATCH v6 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's Stephen Hemminger
@ 2026-02-20 9:50 ` Burakov, Anatoly
0 siblings, 0 replies; 297+ messages in thread
From: Burakov, Anatoly @ 2026-02-20 9:50 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On 2/19/2026 8:08 PM, Stephen Hemminger wrote:
> On Thu, 19 Feb 2026 16:22:43 +0000
> Anatoly Burakov <anatoly.burakov@intel.com> wrote:
>
>> This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
>
> AI review had some observations here:
Thanks Stephen!
>
> Patch 20/27: net/iavf: avoid rte malloc in MAC address operations
>
> in_args_size is always sizeof(list_req) (full 64-entry struct) regardless of how many addresses are populated. The old code computed an exact-fit length. Probably harmless since num_elements governs PF-side parsing, but worth verifying.
>
This is intentional.
> Patch 21/27: net/iavf: avoid rte malloc in IPsec operations
>
> Pre-existing: iavf_ipsec_crypto_status_get() response struct uses struct virtchnl_ipsec_cap but the function reads ipsec_status. The old code had the same mismatch. Since you're refactoring this function, consider fixing the response type to struct virtchnl_ipsec_status.
Since this is a pre-existing bug, we need to fix it and backport it
separately. Good find!
>
> Patch 23/27: net/iavf: avoid rte malloc in irq map config
>
> iavf_config_irq_map_lv_chunk(): double-offset bug. The loop runs for (i = chunk_start; i < chunk_end; i++) but then indexes map_info->qv_maps[i] (should be [i - chunk_start]) and vf->qv_map[chunk_start + i] (should be [i]). For the second chunk, this writes past the local array and reads the wrong qv_map entries.
Good catch, will fix.
> iavf_config_irq_map(): the num_vectors counting via if (vmi > max_vmi) only works if vector IDs are assigned in monotonically increasing order across the queue iteration. If not (e.g., round-robin where a lower vmi appears after a higher one), the count will be too low. The old code simply used vf->nb_msix which is always correct.
That is how it happens. The reason I changed it in the first place is
because *technically* there is no "max value" suitable for use here, since
nb_msix comes from the VF mailbox. We do, however, know that vector IDs
*are* assigned monotonically and in a round-robin fashion, and in fact the
code comments indicate that this is the assumption.
--
Thanks,
Anatoly
* [PATCH v7 00/27] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-19 16:22 ` [PATCH v6 " Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (26 more replies)
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
19 siblings, 27 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for the ixgbe, i40e, iavf, and ice PMDs.
IXGBE:
- Remove unnecessary macros and #ifdefs
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on the driver bug fix patchset [1] (already integrated into next-net-intel)
as well as an IPsec struct fix [2].
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
[2] https://patches.dpdk.org/project/dpdk/patch/c87355f75826ec90a506dc8d4548e3f6af2b7e93.1771581658.git.anatoly.burakov@intel.com/
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
v4 -> v5:
- Adjusted typing for queue size
- Fixed missing zero initializations for stack allocations
v5 -> v6:
- Addressed feedback for v3, v4, and v5
- Changed more allocations to be stack based
- Reworked queue and IRQ map related i40e patches for better logic
v6 -> v7:
- Fixed offset logic in IRQ map
- (Hopefully) fixed zero-sized array initialization error for MSVC
Anatoly Burakov (27):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in IPsec operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 370 +++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 26 +-
drivers/net/intel/i40e/i40e_flow.c | 147 ++++---
drivers/net/intel/i40e/i40e_hash.c | 27 +-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +-
drivers/net/intel/i40e/rte_pmd_i40e.c | 60 +--
drivers/net/intel/iavf/iavf.h | 7 +-
drivers/net/intel/iavf/iavf_ethdev.c | 37 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 -
drivers/net/intel/iavf/iavf_hash.c | 13 +-
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 320 ++++++--------
drivers/net/intel/iavf/iavf_vchnl.c | 413 ++++++++++---------
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 26 +-
drivers/net/intel/ice/ice_ethdev.c | 29 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 228 ++++++----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 826 insertions(+), 1036 deletions(-)
--
2.47.3
* [PATCH v7 01/27] net/ixgbe: remove MAC type check macros
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (25 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v7 02/27] net/ixgbe: remove security-related ifdefery
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (24 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index a6454cd1fe..3be0f0492a 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -460,7 +460,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -474,7 +473,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -631,9 +629,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -661,9 +657,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -675,7 +669,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -683,7 +676,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -871,10 +863,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1505,13 +1495,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2472,9 +2460,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2629,9 +2615,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2692,10 +2676,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2873,10 +2855,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3170,10 +3150,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5101,10 +5079,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5610,7 +5586,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5623,7 +5598,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v7 03/27] net/ixgbe: split security and ntuple filters
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 01/27] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 02/27] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
` (23 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code
with each other. Separate the security filter from the ntuple filter and
parse it separately.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 202 ++++++++++++++++-----------
1 file changed, 122 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..01cd4f9bde 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,112 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+ int ret;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action, which is
+ * const. however, we do need to act on the session, so either we do
+ * some kind of pointer based lookup to get session pointer internally
+ * (which quickly gets unwieldy for lots of flows case), or we simply
+ * cast away constness. the latter path was chosen.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+ if (ret) {
+ rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "Failed to add security session.");
+ return -rte_errno;
+ }
+ return 0;
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +699,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3133,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3369,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v7 04/27] net/i40e: get rid of global filter variables
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (2 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 03/27] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 05/27] net/i40e: make default RSS key global Anatoly Burakov
` (22 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact
that `rte_flow_validate()` is called directly from `rte_flow_create()`,
with no way to pass state between the two functions. Fix that by making a
small wrapper around validation that creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything,
and so is omitted from the structure.
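The approach described above can be sketched roughly as follows (illustrative names only, not the actual driver code): validation proper takes a caller-supplied context, and the validate callback becomes a thin wrapper that supplies a throwaway one, while create() passes a context it keeps.

```c
#include <assert.h>

/* Stand-in for the driver's filter context union + type tag. */
struct filter_ctx { int type; int data; };

/* Validation proper: parses 'input' and fills the caller's context.
 * In the real driver this would be the shared parse path. */
static int
flow_validate_with_ctx(struct filter_ctx *ctx, int input)
{
	if (input < 0)
		return -1;       /* parse failure */
	ctx->type = input != 0;  /* record what kind of filter was parsed */
	ctx->data = input;
	return 0;
}

/* The validate callback: no caller state survives the call, so a dummy
 * context is created on the stack and discarded. create() would instead
 * pass its own context and reuse the parsed result. */
static int
flow_validate(int input)
{
	struct filter_ctx dummy = {0};

	return flow_validate_with_ctx(&dummy, input);
}
```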
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v7 05/27] net/i40e: make default RSS key global
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (3 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 04/27] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (21 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but
in each of those places we define it as a local variable. Make it a global
constant, and adjust all callers to use it. When dealing with the adminq,
we cannot send down the constant directly because adminq commands do not
guarantee const-ness, so copy the RSS key into a local buffer before
sending it down to hardware.
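The copy-before-send pattern can be sketched outside the driver as follows;
the names here (set_rss_key, reset_rss_key, RSS_KEY_LEN) are illustrative
stand-ins, not the driver's actual symbols:

```c
#include <stdint.h>
#include <string.h>

/* 52 bytes, matching (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t). */
#define RSS_KEY_LEN 52

static const uint8_t rss_key_default[RSS_KEY_LEN] = {
	0x44, 0x39, 0x79, 0x6b, 0xb5, 0x4c, 0x50, 0x23,
	/* remaining bytes zero-filled for this sketch */
};

/* Stand-in for an adminq-style call that takes a non-const buffer. */
static int set_rss_key(uint8_t *key, uint16_t len)
{
	key[0] ^= 0; /* the real call may treat the buffer as writable */
	return len == RSS_KEY_LEN ? 0 : -1;
}

static int reset_rss_key(const uint8_t *user_key, uint16_t user_len)
{
	uint8_t buf[RSS_KEY_LEN];
	const uint8_t *key = user_key;

	if (key == NULL || user_len < sizeof(buf))
		key = rss_key_default; /* fall back to the shared default */

	/* Copy so the const default never reaches the non-const API. */
	memcpy(buf, key, sizeof(buf));
	return set_rss_key(buf, sizeof(buf));
}
```

The local buffer lives only for the duration of the call, so the shared
constant can never be modified behind the caller's back.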
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index b891215191..9ab8c35621 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9080,23 +9080,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v7 06/27] net/i40e: use unsigned types for queue comparisons
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (4 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 05/27] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 07/27] net/i40e: use proper flex len define Anatoly Burakov
` (20 subsequent siblings)
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum traffic class
value of 64, we do not use unsigned values, which results in a compiler
warning when attempting to compare `I40E_MAX_Q_PER_TC` to an unsigned
value. Make it an unsigned 16-bit value, and adjust callers to use the
correct types.
As a consequence, `i40e_align_floor` now returns an unsigned value as
well - this is correct, because nothing about that function implies that
signed usage is a valid use case.
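The align-floor behavior referenced above can be sketched as follows; this
is a minimal stand-in using a GCC/Clang builtin, not the driver's exact
implementation:

```c
#include <stdint.h>

/* Largest power of two <= n, or 0 for n == 0 -- the behavior
 * i40e_align_floor provides, now over unsigned types. */
static inline uint32_t align_floor(uint32_t n)
{
	if (n == 0)
		return 0;
	return UINT32_C(1) << (31 - __builtin_clz(n));
}
```

With an unsigned return type, the result can be compared against an
unsigned bound such as `I40E_MAX_Q_PER_TC` without a sign-compare warning.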
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 9ab8c35621..27fa789e21 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8976,11 +8976,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
/* Calculate the maximum number of contiguous PF queues that are configured */
-int
+uint16_t
i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
{
struct rte_eth_dev_data *data = pf->dev_data;
- int i, num;
+ int i;
+ uint16_t num;
struct ci_rx_queue *rxq;
num = 0;
@@ -9056,7 +9057,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ uint16_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
@@ -9072,7 +9073,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
return 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++)
- lut[i] = (uint8_t)(i % (uint32_t)num);
+ lut[i] = (uint8_t)(i % num);
return i40e_set_rss_lut(pf->main_vsi, lut, (uint16_t)i);
}
@@ -10769,7 +10770,7 @@ i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
return I40E_ERR_INVALID_QP_ID;
}
- qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+ qpnum_per_tc = RTE_MIN((uint16_t)i40e_align_floor(qpnum_per_tc),
I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..ca6638b32c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC UINT16_C(64)
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1456,7 +1456,7 @@ int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
void i40e_flex_payload_reg_set_default(struct i40e_hw *hw);
void i40e_pf_disable_rss(struct i40e_pf *pf);
-int i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
+uint16_t i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
int i40e_pf_reset_rss_reta(struct i40e_pf *pf);
int i40e_pf_reset_rss_key(struct i40e_pf *pf);
int i40e_pf_config_rss(struct i40e_pf *pf);
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..5756ebf255 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ uint16_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v7 07/27] net/i40e: use proper flex len define
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (5 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 08/27] net/i40e: remove global pattern variable Anatoly Burakov
` (19 subsequent siblings)
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and the only reason this works is that they both evaluate to the
same value.
Use the i40e-specific definition instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index ca6638b32c..d57c53f661 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v7 08/27] net/i40e: remove global pattern variable
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (6 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 07/27] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (18 subsequent siblings)
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list by
removing void flow items and copies the patterns into an array. When there
are 32 or fewer flow items, that array is a file-scope global; for bigger
patterns, a new list is dynamically allocated with an rte_zmalloc call,
which seems like overkill for this use case.
Remove the global array, and replace the split behavior with an
unconditional allocation.
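The count-then-allocate step the patch switches to can be sketched like
this; the item types and the skip_void_items() helper are hypothetical
simplifications of rte_flow_item handling, not the driver's real API:

```c
#include <stdlib.h>

/* Illustrative item types standing in for rte_flow_item_type. */
enum item_type { ITEM_END, ITEM_VOID, ITEM_ETH, ITEM_IPV4 };

struct flow_item {
	enum item_type type;
};

static struct flow_item *skip_void_items(const struct flow_item *pattern)
{
	const struct flow_item *it;
	struct flow_item *out;
	size_t n = 1; /* room for the terminating END item */
	size_t j = 0;

	for (it = pattern; it->type != ITEM_END; it++)
		if (it->type != ITEM_VOID)
			n++;

	/* calloc() zeroes the array, so the END terminator (type 0)
	 * is already in place; no global scratch buffer is needed. */
	out = calloc(n, sizeof(*out));
	if (out == NULL)
		return NULL; /* caller reports ENOMEM via rte_flow_error_set */

	for (it = pattern; it->type != ITEM_END; it++)
		if (it->type != ITEM_VOID)
			out[j++] = *it;
	return out;
}
```

Allocating exactly what is needed and freeing it on every exit path avoids
both the fixed-size global and the hugepage-backed rte_zmalloc.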
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 30 ++++++++++--------------------
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..2791139e59 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -145,9 +146,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3835,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3854,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3864,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v7 09/27] net/i40e: avoid rte malloc in tunnel set
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (7 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 08/27] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (17 subsequent siblings)
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we use an rte_zmalloc
followed by an immediate rte_free. This memory does not need to live in
hugepage memory and the allocation size is pretty small, so replace it
with a stack allocation.
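In outline, the change replaces an alloc/free pair with a zero-initialized
stack struct; the mini filter element and aq_add_cloud_filter() stand-in
below are illustrative, not the real i40e admin-queue API:

```c
#include <string.h>

/* Hypothetical miniature of the admin-queue cloud filter element. */
struct cloud_filter_elem {
	unsigned char outer_mac[6];
	unsigned short inner_vlan;
};

/* Stand-in for the firmware call that consumes the element. */
static int aq_add_cloud_filter(const struct cloud_filter_elem *elem)
{
	return elem->outer_mac[0] == 0xff ? -1 : 0;
}

static int tunnel_filter_set(const unsigned char mac[6], unsigned short vlan)
{
	/* Zero-initialized on the stack: replaces the former
	 * rte_zmalloc()/rte_free() pair for this small buffer. */
	struct cloud_filter_elem elem = {0};

	memcpy(elem.outer_mac, mac, sizeof(elem.outer_mac));
	elem.inner_vlan = vlan;
	return aq_add_cloud_filter(&elem); /* nothing to free on return */
}
```

Because the buffer is only passed down synchronously and never stored by
the caller, its lifetime ends naturally when the function returns.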
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 27fa789e21..f9e86b82b7 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8509,38 +8509,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8548,9 +8537,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8571,11 +8560,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8587,11 +8576,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8603,11 +8592,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8618,11 +8607,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8639,8 +8628,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8655,20 +8644,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8680,20 +8669,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8702,48 +8691,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8751,7 +8738,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8760,38 +8746,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8802,19 +8786,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v7 10/27] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (8 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (16 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This memory does
not need to be stored in hugepage memory and the maximum LUT size is 512
bytes, so replace it with stack-based allocation.
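The shape of this change can be sketched in isolation as follows. This is a
minimal standalone example, not driver code: MAX_LUT_SIZE and fill_lut are
hypothetical stand-ins for RTE_ETH_RSS_RETA_SIZE_512 and the driver's LUT
handling.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical cap mirroring RTE_ETH_RSS_RETA_SIZE_512. */
#define MAX_LUT_SIZE 512

/* Sketch: build a redirection table in a fixed-size stack buffer instead
 * of a heap allocation. The runtime size is validated against the stack
 * capacity up front, replacing the old allocation-failure path. */
static int fill_lut(uint8_t *out, uint16_t reta_size)
{
	uint8_t lut[MAX_LUT_SIZE] = {0};
	uint16_t i;

	if (reta_size > MAX_LUT_SIZE)
		return -1;
	for (i = 0; i < reta_size; i++)
		lut[i] = (uint8_t)(i % 4); /* e.g. spread over 4 queues */
	memcpy(out, lut, reta_size);
	return 0;
}
```

The upper bound is known at compile time here, which is what makes the stack
buffer safe; the size check takes over the role of the allocation-failure
branch.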
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index f9e86b82b7..ba66f9e3fd 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4619,7 +4619,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4630,14 +4630,9 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4648,9 +4643,6 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
-out:
- rte_free(lut);
-
return ret;
}
@@ -4662,7 +4654,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4673,15 +4665,9 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4689,9 +4675,6 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
return ret;
}
--
2.47.3
* [PATCH v7 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (9 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (15 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters,
we are using rte_zmalloc followed by an immediate rte_free. This memory
does not need to be stored in hugepage memory, so replace it with regular
malloc/free or stack allocation where appropriate.
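For the runtime-sized temporaries, the swap is rte_zmalloc/rte_free to plain
calloc/free. A minimal sketch of the pattern, with a hypothetical
macvlan_entry standing in for i40e_macvlan_filter:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for struct i40e_macvlan_filter. */
struct macvlan_entry {
	unsigned char mac[6];
	unsigned short vlan;
};

/* Sketch: a short-lived, runtime-sized temporary array taken from the
 * regular C heap. Nothing here is DMA'd or shared across processes, so
 * hugepage-backed rte_zmalloc/rte_free buys nothing over calloc/free. */
static int clone_entries(const struct macvlan_entry *src, size_t n,
			 struct macvlan_entry *dst)
{
	struct macvlan_entry *tmp = calloc(n, sizeof(*tmp));

	if (tmp == NULL)
		return -1;
	memcpy(tmp, src, n * sizeof(*tmp)); /* stand-in for the real work */
	memcpy(dst, tmp, n * sizeof(*tmp));
	free(tmp);
	return 0;
}
```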
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 135 +++++++++++---------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 17 ++--
2 files changed, 65 insertions(+), 87 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index ba66f9e3fd..672d337d99 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6,6 +6,7 @@
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
+#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
@@ -4128,7 +4129,6 @@ static int
i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_mac_filter_info *mac_filter;
struct i40e_vsi *vsi = pf->main_vsi;
struct rte_eth_rxmode *rxmode;
struct i40e_mac_filter *f;
@@ -4163,12 +4163,12 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
+
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
return I40E_ERR_NO_MEMORY;
}
@@ -4206,7 +4206,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6198,7 +6197,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
int i, num;
struct i40e_mac_filter *f;
void *temp;
- struct i40e_mac_filter_info *mac_filter;
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
enum i40e_mac_filter_type desired_filter;
int ret = I40E_SUCCESS;
@@ -6211,12 +6210,9 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
num = vsi->mac_num;
-
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
+ return -1;
}
i = 0;
@@ -6228,7 +6224,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
i++;
}
@@ -6240,13 +6236,11 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
}
-DONE:
- rte_free(mac_filter);
- return ret;
+ return 0;
}
/* Configure vlan stripping on or off */
@@ -7128,19 +7122,20 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_add_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_add_macvlan_element_data *req_list =
+ (struct i40e_aqc_add_macvlan_element_data *)aq_buff;
+
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
+ return I40E_ERR_NO_MEMORY;
+ }
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
- return I40E_ERR_NO_MEMORY;
- }
-
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7169,8 +7164,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC match type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].queue_number = 0;
@@ -7182,14 +7176,11 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
actual_num, NULL);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add macvlan filter");
- goto DONE;
+ return ret;
}
num += actual_num;
} while (num < total);
-
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7202,21 +7193,22 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_remove_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_remove_macvlan_element_data *req_list =
+ (struct i40e_aqc_remove_macvlan_element_data *)aq_buff;
enum i40e_admin_queue_err aq_status;
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
- ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
- ele_buff_size = hw->aq.asq_buf_size;
-
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
return I40E_ERR_NO_MEMORY;
}
+ ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
+ ele_buff_size = hw->aq.asq_buf_size;
+
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7245,8 +7237,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC filter type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].flags = rte_cpu_to_le_16(flags);
}
@@ -7260,15 +7251,13 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ret = I40E_SUCCESS;
} else {
PMD_DRV_LOG(ERR, "Failed to remove macvlan filter");
- goto DONE;
+ return ret;
}
}
num += actual_num;
} while (num < total);
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
/* Find out specific MAC filter */
@@ -7436,7 +7425,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7465,7 +7454,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7473,7 +7462,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
int
i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7490,37 +7479,31 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
i40e_set_vlan_filter(vsi, vlan, 1);
vsi->vlan_num++;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7541,42 +7524,36 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
/* This is last vlan to remove, replace all mac filter with vlan 0 */
if (vsi->vlan_num == 1) {
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
}
i40e_set_vlan_filter(vsi, vlan, 0);
vsi->vlan_num--;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7607,7 +7584,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7646,7 +7623,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7677,7 +7654,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7706,7 +7683,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..4839a1d9bf 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -2,6 +2,7 @@
* Copyright(c) 2010-2017 Intel Corporation
*/
+#include <stdlib.h>
#include <eal_export.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -233,7 +234,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +251,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +295,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +313,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
* [PATCH v7 12/27] net/i40e: avoid rte malloc in VF resource queries
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (10 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (14 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This memory does not need to be stored in hugepage memory and
the allocation size is pretty small, so replace it with stack allocation.
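Because the reply always carries a fixed number of VSIs, the variable-length
"header plus tail" layout can be bundled into one stack object. A hedged
sketch of the idea, with vf_res/vsi_res as hypothetical stand-ins for the
virtchnl structures:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for virtchnl_vf_resource / virtchnl_vsi_resource. */
struct vsi_res { uint16_t vsi_id; uint16_t num_queue_pairs; };
struct vf_res  { uint16_t num_vsis; struct vsi_res vsi_res[1]; };

/* Sketch: one stack object reserves the header and the extra tail element
 * together, removing the runtime-sized heap allocation entirely. */
struct vf_res_reply {
	struct vf_res res;    /* already contains vsi_res[0] */
	struct vsi_res extra; /* room for one more element */
};

static uint32_t build_reply(struct vf_res_reply *reply)
{
	memset(reply, 0, sizeof(*reply));
	reply->res.num_vsis = 1;
	reply->res.vsi_res[0].num_queue_pairs = 4;
	return (uint32_t)sizeof(*reply); /* length sent back to the VF */
}
```

The length reported to the peer is simply sizeof() of the bundled object,
which is at least as large as the old header-plus-elements computation.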
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..08cdd6bc4d 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
+ uint32_t len = sizeof(res);
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v7 13/27] net/i40e: avoid rte malloc in adminq operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (11 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (13 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with stack allocation.
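The event descriptor keeps a pointer to its message buffer, so the change is
just pointing it at a stack array for the duration of the receive loop. A
minimal sketch (AQ_BUF_SZ, arq_event, and recv_one are hypothetical stand-ins
for I40E_AQ_BUF_SZ, i40e_arq_event_info, and the adminq receive path):

```c
#include <stdint.h>
#include <string.h>

#define AQ_BUF_SZ 4096 /* hypothetical stand-in for I40E_AQ_BUF_SZ */

struct arq_event {
	uint16_t buf_len;
	uint8_t *msg_buf;
};

/* Sketch: copy one message into the caller-provided buffer, truncating to
 * the capacity recorded in the descriptor. The buffer itself can live on
 * the caller's stack; no heap allocation is needed per loop. */
static uint16_t recv_one(struct arq_event *ev, const uint8_t *wire, uint16_t n)
{
	if (n > ev->buf_len)
		n = ev->buf_len;
	memcpy(ev->msg_buf, wire, n);
	return n;
}
```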
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 672d337d99..cd648285d1 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6870,14 +6870,11 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_arq_event_info info;
uint16_t pending, opcode;
+ uint8_t msg_buf[I40E_AQ_BUF_SZ] = {0};
int ret;
- info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
- if (!info.msg_buf) {
- PMD_DRV_LOG(ERR, "Failed to allocate mem");
- return;
- }
+ info.buf_len = sizeof(msg_buf);
+ info.msg_buf = msg_buf;
pending = 1;
while (pending) {
@@ -6913,7 +6910,6 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v7 14/27] net/i40e: avoid rte malloc in DDP package handling
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (12 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (12 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by
immediate rte_free. This memory does not need to be stored in hugepage
memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++----------------------
1 file changed, 8 insertions(+), 35 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index 4839a1d9bf..7892fa8a4e 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1557,7 +1557,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
{
struct rte_eth_dev *dev = &rte_eth_devices[port];
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint8_t *buff;
+ uint8_t buff[(I40E_MAX_PROFILE_NUM + 4) * I40E_PROFILE_INFO_SIZE] = {0};
struct rte_pmd_i40e_profile_list *p_list;
struct rte_pmd_i40e_profile_info *pinfo, *p;
uint32_t i;
@@ -1570,13 +1570,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
- if (!buff) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return -1;
- }
ret = i40e_aq_get_ddp_list(
hw, (void *)buff,
@@ -1584,7 +1577,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1592,20 +1584,17 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
return 2;
}
}
@@ -1616,12 +1605,9 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
return 3;
}
}
-
- rte_free(buff);
return 0;
}
@@ -1637,7 +1623,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1702,26 +1691,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1734,13 +1712,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1752,7 +1728,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1765,14 +1740,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1785,7 +1759,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v7 15/27] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (13 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (11 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by immediate
rte_free. This memory does not need to be stored in hugepage memory, so
replace it with stack allocation or regular calloc/free as appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 43 ++++++++--------------------
1 file changed, 12 insertions(+), 31 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index cd648285d1..af736f59be 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11716,8 +11716,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint32_t pctype_num;
- struct rte_pmd_i40e_ptype_info *pctype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info pctype[I40E_CUSTOMIZED_MAX] = {0};
struct i40e_customized_pctype *new_pctype = NULL;
uint8_t proto_id;
uint8_t pctype_value;
@@ -11743,19 +11742,16 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
- if (!pctype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (pctype_num > RTE_DIM(pctype)) {
+ PMD_DRV_LOG(ERR, "Pctype number exceeds maximum supported");
return -1;
}
/* get information about new pctype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)pctype, buff_size,
+ (uint8_t *)pctype, sizeof(pctype),
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
return -1;
}
@@ -11836,7 +11832,6 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
return 0;
}
@@ -11846,11 +11841,10 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
struct rte_pmd_i40e_proto_info *proto,
enum rte_pmd_i40e_package_op op)
{
- struct rte_pmd_i40e_ptype_mapping *ptype_mapping;
+ struct rte_pmd_i40e_ptype_mapping ptype_mapping[I40E_MAX_PKT_TYPE] = {0};
uint16_t port_id = dev->data->port_id;
uint32_t ptype_num;
- struct rte_pmd_i40e_ptype_info *ptype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info ptype[I40E_MAX_PKT_TYPE] = {0};
uint8_t proto_id;
char name[RTE_PMD_I40E_DDP_NAME_SIZE];
uint32_t i, j, n;
@@ -11881,31 +11875,20 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
- if (!ptype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (ptype_num > RTE_DIM(ptype)) {
+ PMD_DRV_LOG(ERR, "Too many ptypes");
return -1;
}
/* get information about new ptype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)ptype, buff_size,
+ (uint8_t *)ptype, sizeof(ptype),
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
- if (!ptype_mapping) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
- return -1;
- }
-
/* Update ptype mapping table. */
for (i = 0; i < ptype_num; i++) {
ptype_mapping[i].hw_ptype = ptype[i].ptype_id;
@@ -12040,8 +12023,6 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
return ret;
}
@@ -12076,7 +12057,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12088,7 +12069,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12126,7 +12107,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v7 16/27] net/iavf: remove remnants of pipeline mode
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (14 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 17/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (10 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the infrastructure used by it
was left in the code. Remove it, as it is now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index ab41b1973e..82323b9aa9 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v7 17/27] net/iavf: decouple hash uninit from parser uninit
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (15 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (9 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization,
but should rather be a separate step in the dev close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf.h | 1 +
drivers/net/intel/iavf/iavf_ethdev.c | 3 +++
drivers/net/intel/iavf/iavf_hash.c | 12 ++++++++----
3 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 39949acc11..6054321771 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -566,4 +566,5 @@ void iavf_dev_watchdog_disable(struct iavf_adapter *adapter);
void iavf_handle_hw_reset(struct rte_eth_dev *dev, bool vf_initiated_reset);
void iavf_set_no_poll(struct iavf_adapter *adapter, bool link_change);
bool is_iavf_supported(struct rte_eth_dev *dev);
+void iavf_hash_uninit(struct iavf_adapter *ad);
#endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 954bce723d..b45da4d8b1 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -2977,6 +2977,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..cb10eeab78 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -77,7 +77,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +680,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1641,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1664,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
--
2.47.3
* [PATCH v7 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (16 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 17/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (8 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, IPsec code will use
rte_malloc to allocate VF message structures. This memory does not need
to be stored in hugepage memory and the allocation size is pretty small,
so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 181 +++++++++------------
1 file changed, 78 insertions(+), 103 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 82323b9aa9..29609e4447 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -458,36 +458,31 @@ static uint32_t
iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
struct rte_security_session_conf *conf)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- struct virtchnl_ipsec_sa_cfg *sa_cfg;
- size_t request_len, response_len;
-
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg sa_cfg;
+ } sa_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp;
+ } sa_resp;
+ struct inline_ipsec_msg *request = &sa_req.msg;
+ struct inline_ipsec_msg *response = &sa_resp.msg;
+ struct virtchnl_ipsec_sa_cfg *sa_cfg = &sa_req.sa_cfg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg);
-
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members so we have to memset them instead.
+ */
+ memset(&sa_req, 0, sizeof(sa_req));
+ memset(&sa_resp, 0, sizeof(sa_resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
request->req_id = (uint16_t)0xDEADBEEF;
/* set SA configuration params */
- sa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);
-
sa_cfg->spi = conf->ipsec.spi;
sa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
sa_cfg->virtchnl_direction =
@@ -541,10 +536,10 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sa_req),
+ (uint8_t *)response, sizeof(sa_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -552,9 +547,6 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
rc = -EFAULT;
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
-update_cleanup:
- rte_free(response);
- rte_free(request);
return rc;
}
@@ -722,18 +714,24 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
bool is_udp,
uint16_t udp_port)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg sp_cfg;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members so we have to memset them instead.
+ */
+ memset(&sp_req, 0, sizeof(sp_req));
+ memset(&sp_resp, 0, sizeof(sp_resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;
@@ -768,21 +766,12 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request->ipsec_data.sp_cfg->is_udp = is_udp;
request->ipsec_data.sp_cfg->udp_port = htons(udp_port);
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -791,10 +780,6 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sp_cfg_resp->rule_id;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -802,25 +787,24 @@ static uint32_t
iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_update sa_update;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp ipsec_resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members so we have to memset them instead.
+ */
+ memset(&sp_req, 0, sizeof(sp_req));
+ memset(&sp_resp, 0, sizeof(sp_resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;
@@ -833,10 +817,10 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -845,10 +829,6 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.ipsec_resp->resp;
-update_cleanup:
- rte_free(request);
- rte_free(response);
-
return rc;
}
@@ -899,25 +879,24 @@ int
iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
uint8_t is_v4, uint32_t flow_id)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sp_destroy sp_destroy;
+ } sp_req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } sp_resp;
+ struct inline_ipsec_msg *request = &sp_req.msg;
+ struct inline_ipsec_msg *response = &sp_resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members so we have to memset them instead.
+ */
+ memset(&sp_req, 0, sizeof(sp_req));
+ memset(&sp_resp, 0, sizeof(sp_resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;
@@ -931,21 +910,17 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(sp_req),
+ (uint8_t *)response, sizeof(sp_resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id)
rc = -EFAULT;
else
- return response->ipsec_data.ipsec_status->status;
-
-update_cleanup:
- rte_free(request);
- rte_free(response);
+ rc = response->ipsec_data.ipsec_status->status;
return rc;
}
--
2.47.3
* [PATCH v7 19/27] net/iavf: avoid rte malloc in RSS configuration
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (17 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (7 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (redirection/lookup table and hash key),
we are using rte_zmalloc followed by an immediate rte_free. This memory
does not need to be stored in hugepage memory, and in the context of IAVF
we do not define how big these structures can be, so replace the
allocations with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index b45da4d8b1..4e0df2ca05 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1553,7 +1553,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1573,7 +1573,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v7 20/27] net/iavf: avoid rte malloc in MAC address operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (18 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
` (6 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
The original code also had a loop that attempted to split the MAC
address list across multiple virtchnl messages. However, the maximum
number of MAC addresses is 64, each virtchnl MAC address entry is 8
bytes, and the maximum virtchnl message size is 4K, so splitting is
never actually necessary. This loop has been removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 82 ++++++++++-------------------
1 file changed, 29 insertions(+), 53 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..f44dc7e7be 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1380,63 +1380,39 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
void
iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
- struct virtchnl_ether_addr_list *list;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[IAVF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct rte_ether_addr *addr;
- struct iavf_cmd_info args;
- int len, err, i, j;
- int next_begin = 0;
- int begin = 0;
+ struct iavf_cmd_info args = {0};
+ int err, i;
- do {
- j = 0;
- len = sizeof(struct virtchnl_ether_addr_list);
- for (i = begin; i < IAVF_NUM_MACADDR_MAX; i++, next_begin++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- len += sizeof(struct virtchnl_ether_addr);
- if (len >= IAVF_AQ_BUF_SZ) {
- next_begin = i + 1;
- break;
- }
- }
+ for (i = 0; i < IAVF_NUM_MACADDR_MAX; i++) {
+ struct rte_ether_addr *addr = &adapter->dev_data->mac_addrs[i];
+ struct virtchnl_ether_addr *vc_addr = &list->list[list->num_elements];
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return;
- }
+ /* ignore empty addresses */
+ if (rte_is_zero_ether_addr(addr))
+ continue;
+ list->num_elements++;
- for (i = begin; i < next_begin; i++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- rte_memcpy(list->list[j].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
- list->list[j].type = (j == 0 ?
- VIRTCHNL_ETHER_ADDR_PRIMARY :
- VIRTCHNL_ETHER_ADDR_EXTRA);
- PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
- RTE_ETHER_ADDR_BYTES(addr));
- j++;
- }
- list->vsi_id = vf->vsi_res->vsi_id;
- list->num_elements = j;
- args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
- VIRTCHNL_OP_DEL_ETH_ADDR;
- args.in_args = (uint8_t *)list;
- args.in_args_size = len;
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command %s",
- add ? "OP_ADD_ETHER_ADDRESS" :
- "OP_DEL_ETHER_ADDRESS");
- rte_free(list);
- begin = next_begin;
- } while (begin < IAVF_NUM_MACADDR_MAX);
+ memcpy(vc_addr->addr, addr->addr_bytes, sizeof(addr->addr_bytes));
+ vc_addr->type = (list->num_elements == 1) ?
+ VIRTCHNL_ETHER_ADDR_PRIMARY :
+ VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+ list->vsi_id = vf->vsi_res->vsi_id;
+ args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.in_args = (uint8_t *)list;
+ args.in_args_size = sizeof(list_req);
+ args.out_buffer = vf->aq_resp;
+ args.out_size = IAVF_AQ_BUF_SZ;
+ err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" : "OP_DEL_ETHER_ADDRESS");
}
int
--
2.47.3
* [PATCH v7 21/27] net/iavf: avoid rte malloc in IPsec operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (19 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (5 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when performing IPsec security association operations and
retrieving device capabilities, we are using rte_malloc followed by
immediate rte_free. This memory does not need to be stored in hugepage
memory and the allocation size can be pretty small, so replace with stack
allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 138 +++++++++------------
1 file changed, 56 insertions(+), 82 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 29609e4447..650f805f0a 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -3,6 +3,7 @@
*/
#include <stdalign.h>
+#include <stdlib.h>
#include <rte_cryptodev.h>
#include <rte_ethdev.h>
@@ -929,28 +930,23 @@ static uint32_t
iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
struct iavf_security_session *sess)
{
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
-
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_sa_destroy sa_destroy;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_resp resp;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc = 0;
- request_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_sa_destroy);
-
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_resp);
-
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members, so we have to memset them instead.
+ */
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;
@@ -969,10 +965,10 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response */
if (response->ipsec_opcode != request->ipsec_opcode ||
@@ -987,10 +983,6 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response->ipsec_data.ipsec_status->status)
rc = -EFAULT;
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1136,27 +1128,23 @@ static int
iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
struct virtchnl_ipsec_cap *capability)
{
+ struct {
+ struct inline_ipsec_msg msg;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_cap cap;
+ } resp;
/* Perform pf-vf comms */
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg);
-
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members, so we have to memset them instead.
+ */
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_GET_CAP;
@@ -1164,23 +1152,18 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
response->req_id != request->req_id){
- rc = -EFAULT;
- goto update_cleanup;
+ return -EFAULT;
}
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
@@ -1562,26 +1545,22 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
struct virtchnl_ipsec_status *status)
{
/* Perform pf-vf comms */
- struct inline_ipsec_msg *request = NULL, *response = NULL;
- size_t request_len, response_len;
+ struct {
+ struct inline_ipsec_msg msg;
+ } req;
+ struct {
+ struct inline_ipsec_msg msg;
+ struct virtchnl_ipsec_status status;
+ } resp;
+ struct inline_ipsec_msg *request = &req.msg, *response = &resp.msg;
int rc;
- request_len = sizeof(struct inline_ipsec_msg);
-
- request = rte_malloc("iavf-device-status-request", request_len, 0);
- if (request == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
-
- response_len = sizeof(struct inline_ipsec_msg) +
- sizeof(struct virtchnl_ipsec_status);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
- if (response == NULL) {
- rc = -ENOMEM;
- goto update_cleanup;
- }
+ /*
+ * MSVC doesn't allow inline initialization of structs with zero-sized
+ * members, so we have to memset them instead.
+ */
+ memset(&req, 0, sizeof(req));
+ memset(&resp, 0, sizeof(resp));
/* set msg header params */
request->ipsec_opcode = INLINE_IPSEC_OP_GET_STATUS;
@@ -1589,23 +1568,18 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
/* send virtual channel request to add SA to hardware database */
rc = iavf_ipsec_crypto_request(adapter,
- (uint8_t *)request, request_len,
- (uint8_t *)response, response_len);
+ (uint8_t *)request, sizeof(req),
+ (uint8_t *)response, sizeof(resp));
if (rc)
- goto update_cleanup;
+ return rc;
/* verify response id */
if (response->ipsec_opcode != request->ipsec_opcode ||
- response->req_id != request->req_id){
- rc = -EFAULT;
- goto update_cleanup;
+ response->req_id != request->req_id){
+ return -EFAULT;
}
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
-update_cleanup:
- rte_free(response);
- rte_free(request);
-
return rc;
}
--
2.47.3
* [PATCH v7 22/27] net/iavf: avoid rte malloc in queue operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (20 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (4 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is unnecessary, as
these structures are not stored anywhere, so replace them with stack
allocations.
The original code did not check the maximum queue number, because the
design was built around an anti-pattern of the caller having to chunk
the queue configuration. This has now been corrected as well.
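The chunking that moves from the caller into the helper can be sketched like this. Names and the chunk limit are illustrative; in the driver the limit is `IAVF_CFG_Q_NUM_PER_BUF`, chosen so each message fits the 4K adminq buffer.

```c
#include <stdint.h>

#define CHUNK_MAX 32 /* stand-in for IAVF_CFG_Q_NUM_PER_BUF */

/* Stand-in for iavf_configure_queue_chunk(): configures queues
 * [chunk_start, chunk_start + chunk_sz) in one message. */
static int configure_chunk(uint16_t chunk_sz, uint16_t chunk_start)
{
	(void)chunk_start;
	/* a real implementation would build and send one message here */
	return chunk_sz <= CHUNK_MAX ? 0 : -1;
}

/* The helper walks the full range itself, so callers no longer need
 * to split the work into buffer-sized pieces. */
static int configure_all(uint16_t num_queue_pairs)
{
	for (uint16_t c = 0; c < num_queue_pairs; c += CHUNK_MAX) {
		uint16_t remaining = (uint16_t)(num_queue_pairs - c);
		uint16_t chunk_sz = remaining < CHUNK_MAX ? remaining : CHUNK_MAX;
		int err = configure_chunk(chunk_sz, c);

		if (err)
			return err;
	}
	return 0;
}
```

For example, 100 queue pairs with a 32-entry limit are sent as chunks of 32, 32, 32, and 4, matching the loop structure introduced in `iavf_configure_queues()` below.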
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +-
drivers/net/intel/iavf/iavf_vchnl.c | 212 ++++++++++++++-------------
3 files changed, 115 insertions(+), 115 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 6054321771..77a2c94290 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -503,8 +503,7 @@ int iavf_disable_queues(struct iavf_adapter *adapter);
int iavf_disable_queues_lv(struct iavf_adapter *adapter);
int iavf_configure_rss_lut(struct iavf_adapter *adapter);
int iavf_configure_rss_key(struct iavf_adapter *adapter);
-int iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index);
+int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs);
int iavf_get_supported_rxdid(struct iavf_adapter *adapter);
int iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, bool enable);
int iavf_config_vlan_insert_v2(struct iavf_adapter *adapter, bool enable);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 4e0df2ca05..6e216f4c0f 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1036,20 +1036,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_set_vf_quanta_size(adapter, index, num_queue_pairs) != 0)
PMD_DRV_LOG(WARNING, "configure quanta size failed");
- /* If needed, send configure queues msg multiple times to make the
- * adminq buffer length smaller than the 4K limitation.
- */
- while (num_queue_pairs > IAVF_CFG_Q_NUM_PER_BUF) {
- if (iavf_configure_queues(adapter,
- IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
- PMD_DRV_LOG(ERR, "configure queues failed");
- goto error;
- }
- num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
- index += IAVF_CFG_Q_NUM_PER_BUF;
- }
-
- if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
+ if (iavf_configure_queues(adapter, num_queue_pairs) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
goto error;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f44dc7e7be..f0ab3b950b 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,14 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1125,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1133,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1214,88 +1200,116 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
return err;
}
-int
-iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index)
+static void
+iavf_configure_queue_pair(struct iavf_adapter *adapter,
+ struct virtchnl_queue_pair_info *vc_qp,
+ uint16_t q_idx)
{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct ci_rx_queue **rxq = (struct ci_rx_queue **)adapter->dev_data->rx_queues;
struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
+
+ /* common parts */
+ vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->txq.queue_id = q_idx;
+
+ vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->rxq.queue_id = q_idx;
+ vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+
+ /* is this txq active? */
+ if (q_idx < adapter->dev_data->nb_tx_queues) {
+ vc_qp->txq.ring_len = txq[q_idx]->nb_tx_desc;
+ vc_qp->txq.dma_ring_addr = txq[q_idx]->tx_ring_dma;
+ }
+
+ /* is this rxq active? */
+ if (q_idx >= adapter->dev_data->nb_rx_queues)
+ return;
+
+ vc_qp->rxq.ring_len = rxq[q_idx]->nb_rx_desc;
+ vc_qp->rxq.dma_ring_addr = rxq[q_idx]->rx_ring_phys_addr;
+ vc_qp->rxq.databuffer_size = rxq[q_idx]->rx_buf_len;
+ vc_qp->rxq.crc_disable = rxq[q_idx]->crc_len != 0 ? 1 : 0;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ if (vf->supported_rxdid & RTE_BIT64(rxq[q_idx]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[q_idx]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, q_idx);
+ } else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[q_idx]->rxdid, IAVF_RXDID_LEGACY_1, q_idx);
+ vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+ }
+
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
+ vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
+ rxq[q_idx]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
+ }
+}
+
+static int
+iavf_configure_queue_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
+{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_vsi_queue_config_info *vc_config;
- struct virtchnl_queue_pair_info *vc_qp;
- struct iavf_cmd_info args;
- uint16_t i, size;
+ struct {
+ struct virtchnl_vsi_queue_config_info config;
+ struct virtchnl_queue_pair_info qp[IAVF_CFG_Q_NUM_PER_BUF];
+ } queue_req = {0};
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_vsi_queue_config_info *vc_config = &queue_req.config;
+ struct virtchnl_queue_pair_info *vc_qp = vc_config->qpair;
+ uint16_t chunk_end = chunk_start + chunk_sz;
+ uint16_t i;
int err;
- size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
- if (!vc_config)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
vc_config->vsi_id = vf->vsi_res->vsi_id;
- vc_config->num_queue_pairs = num_queue_pairs;
+ vc_config->num_queue_pairs = chunk_sz;
- for (i = index, vc_qp = vc_config->qpair;
- i < index + num_queue_pairs;
- i++, vc_qp++) {
- vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->txq.queue_id = i;
+ for (i = chunk_start; i < chunk_end; i++, vc_qp++)
+ iavf_configure_queue_pair(adapter, vc_qp, i);
- /* Virtchnnl configure tx queues by pairs */
- if (i < adapter->dev_data->nb_tx_queues) {
- vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
- }
-
- vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->rxq.queue_id = i;
- vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
-
- if (i >= adapter->dev_data->nb_rx_queues)
- continue;
-
- /* Virtchnnl configure rx queues by pairs */
- vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
- vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
- vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
- vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
- if (vf->supported_rxdid & RTE_BIT64(rxq[i]->rxdid)) {
- vc_qp->rxq.rxdid = rxq[i]->rxdid;
- PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
- vc_qp->rxq.rxdid, i);
- } else {
- PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
- "request default RXDID[%d] in Queue[%d]",
- rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
- vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- }
-
- if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
- vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
- rxq[i]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
- }
- }
-
- memset(&args, 0, sizeof(args));
args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
args.in_args = (uint8_t *)vc_config;
- args.in_args_size = size;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
- PMD_DRV_LOG(ERR, "Failed to execute command of"
- " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
-
- rte_free(vc_config);
+ PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL_OP_CONFIG_VSI_QUEUES");
return err;
}
+int
+iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs)
+{
+ uint16_t c;
+ int err;
+
+ /*
+ * we cannot configure all queues in one go because they won't fit into
+ * adminq buffer, so we're going to chunk them instead
+ */
+ for (c = 0; c < num_queue_pairs; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num_queue_pairs - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_configure_queue_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
+}
+
int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
--
2.47.3
* [PATCH v7 23/27] net/iavf: avoid rte malloc in irq map config
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (21 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (3 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This memory does not need to be stored in
hugepage memory, so replace it with stack allocation.
The original code did not check the maximum IRQ map size, because the
design was built around an anti-pattern of the caller having to chunk
the IRQ map calls. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +---
drivers/net/intel/iavf/iavf_vchnl.c | 111 +++++++++++++++++----------
3 files changed, 73 insertions(+), 56 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 77a2c94290..f9bb398a77 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -511,8 +511,7 @@ int iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t vlanid,
bool add);
int iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter);
int iavf_config_irq_map(struct iavf_adapter *adapter);
-int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index);
+int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num);
void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
int iavf_dev_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 6e216f4c0f..26e7febecf 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -919,20 +919,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
goto config_irq_map_err;
}
} else {
- uint16_t num_qv_maps = dev->data->nb_rx_queues;
- uint16_t index = 0;
-
- while (num_qv_maps > IAVF_IRQ_MAP_NUM_PER_BUF) {
- if (iavf_config_irq_map_lv(adapter,
- IAVF_IRQ_MAP_NUM_PER_BUF, index)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
- goto config_irq_map_err;
- }
- num_qv_maps -= IAVF_IRQ_MAP_NUM_PER_BUF;
- index += IAVF_IRQ_MAP_NUM_PER_BUF;
- }
-
- if (iavf_config_irq_map_lv(adapter, num_qv_maps, index)) {
+ if (iavf_config_irq_map_lv(adapter, dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
goto config_irq_map_err;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f0ab3b950b..dce4122410 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1314,81 +1314,112 @@ int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_irq_map_info *map_info;
- struct virtchnl_vector_map *vecmap;
- struct iavf_cmd_info args;
- int len, i, err;
+ struct {
+ struct virtchnl_irq_map_info map_info;
+ struct virtchnl_vector_map vecmap[IAVF_MAX_NUM_QUEUES_DFLT];
+ } map_req = {0};
+ struct virtchnl_irq_map_info *map_info = &map_req.map_info;
+ struct iavf_cmd_info args = {0};
+ int i, err, max_vmi = -1;
- len = sizeof(struct virtchnl_irq_map_info) +
- sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+ if (adapter->dev_data->nb_rx_queues > IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "number of queues (%u) exceeds the max supported (%u)",
+ adapter->dev_data->nb_rx_queues, IAVF_MAX_NUM_QUEUES_DFLT);
+ return -EINVAL;
+ }
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
-
- map_info->num_vectors = vf->nb_msix;
for (i = 0; i < adapter->dev_data->nb_rx_queues; i++) {
- vecmap =
- &map_info->vecmap[vf->qv_map[i].vector_id - vf->msix_base];
+ struct virtchnl_vector_map *vecmap;
+ /* always 0 for 1 MSIX, never bigger than rxq for multi MSIX */
+ uint16_t vmi = vf->qv_map[i].vector_id - vf->msix_base;
+
+ /* can't happen but avoid static analysis warnings */
+ if (vmi >= IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "vector id (%u) exceeds the max supported (%u)",
+ vf->qv_map[i].vector_id,
+ vf->msix_base + IAVF_MAX_NUM_QUEUES_DFLT - 1);
+ return -EINVAL;
+ }
+
+ vecmap = &map_info->vecmap[vmi];
vecmap->vsi_id = vf->vsi_res->vsi_id;
vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
vecmap->vector_id = vf->qv_map[i].vector_id;
vecmap->txq_map = 0;
vecmap->rxq_map |= 1 << vf->qv_map[i].queue_id;
+
+ /* MSIX vectors round robin so look for max */
+ if (vmi > max_vmi) {
+ map_info->num_vectors++;
+ max_vmi = vmi;
+ }
}
args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(map_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
return err;
}
-int
-iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index)
+static int
+iavf_config_irq_map_lv_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
{
+ struct {
+ struct virtchnl_queue_vector_maps map_info;
+ struct virtchnl_queue_vector qv_maps[IAVF_CFG_Q_NUM_PER_BUF];
+ } chunk_req = {0};
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_queue_vector_maps *map_info;
- struct virtchnl_queue_vector *qv_maps;
- struct iavf_cmd_info args;
- int len, i, err;
- int count = 0;
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_queue_vector_maps *map_info = &chunk_req.map_info;
+ struct virtchnl_queue_vector *qv_maps = chunk_req.qv_maps;
+ uint16_t i;
- len = sizeof(struct virtchnl_queue_vector_maps) +
- sizeof(struct virtchnl_queue_vector) * (num - 1);
-
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
map_info->vport_id = vf->vsi_res->vsi_id;
- map_info->num_qv_maps = num;
- for (i = index; i < index + map_info->num_qv_maps; i++) {
- qv_maps = &map_info->qv_maps[count++];
+ map_info->num_qv_maps = chunk_sz;
+ for (i = 0; i < chunk_sz; i++) {
+ qv_maps = &map_info->qv_maps[i];
qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
- qv_maps->queue_id = vf->qv_map[i].queue_id;
- qv_maps->vector_id = vf->qv_map[i].vector_id;
+ qv_maps->queue_id = vf->qv_map[chunk_start + i].queue_id;
+ qv_maps->vector_id = vf->qv_map[chunk_start + i].vector_id;
}
args.ops = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(chunk_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
- return err;
+ return iavf_execute_vf_cmd_safe(adapter, &args, 0);
+}
+
+int
+iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num)
+{
+ uint16_t c;
+ int err;
+
+ for (c = 0; c < num; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_config_irq_map_lv_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure irq map chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
}
void
--
2.47.3
* [PATCH v7 24/27] net/ice: avoid rte malloc in RSS RETA operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (22 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (2 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA),
we are using rte_zmalloc followed by an immediate rte_free. This memory
does not need to be stored in hugepage memory, so replace it with stack
allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 29 ++++++--------------------
2 files changed, 8 insertions(+), 25 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index abd7875e7b..388495d69c 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1336,7 +1336,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1356,7 +1356,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index 41474b7002..0d6b030536 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5564,7 +5564,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 &&
@@ -5581,14 +5581,9 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = ice_get_rss_lut(pf->main_vsi, lut, lut_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5604,10 +5599,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
pf->hash_lut_size = reta_size;
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
@@ -5618,7 +5610,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != lut_size) {
@@ -5630,15 +5622,9 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5647,10 +5633,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
--
2.47.3
* [PATCH v7 25/27] net/ice: avoid rte malloc in MAC address operations
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (23 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 388495d69c..0d3599d7d0 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -926,19 +926,14 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
struct rte_ether_addr *mc_addrs,
uint32_t mc_addrs_num, bool add)
{
- struct virtchnl_ether_addr_list *list;
- struct dcf_virtchnl_cmd args;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[DCF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
+ struct dcf_virtchnl_cmd args = {0};
uint32_t i;
- int len, err = 0;
-
- len = sizeof(struct virtchnl_ether_addr_list);
- len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
-
- list = rte_zmalloc(NULL, len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return -ENOMEM;
- }
+ int err = 0;
for (i = 0; i < mc_addrs_num; i++) {
memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
@@ -953,13 +948,12 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
VIRTCHNL_OP_DEL_ETH_ADDR;
args.req_msg = (uint8_t *)list;
- args.req_msglen = len;
+ args.req_msglen = sizeof(list_req);
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
return err;
}
--
2.47.3
* [PATCH v7 26/27] net/ice: avoid rte malloc in raw pattern parsing
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (24 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with regular malloc/free.
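The shape of the change can be sketched as follows. This is a hedged, self-contained sketch, not the driver code: `parse_raw` and its "parsing" body are placeholders; the point is the allocation pattern of paired `calloc()` scratch buffers freed on every exit path.

```c
#include <stdlib.h>
#include <string.h>

/* Placeholder for the raw-pattern parse: allocate paired spec/mask
 * scratch buffers with plain calloc() (no hugepage memory needed for
 * short-lived buffers) and free both on every exit path. */
static int parse_raw(const unsigned char *input, size_t pkt_len)
{
	unsigned char *tmp_spec, *tmp_mask;
	int ret;

	tmp_spec = calloc(1, pkt_len);
	if (!tmp_spec)
		return -1;
	tmp_mask = calloc(1, pkt_len);
	if (!tmp_mask) {
		free(tmp_spec); /* partial-failure cleanup */
		return -1;
	}

	/* placeholder "parse": copy input as spec, all-ones mask */
	memcpy(tmp_spec, input, pkt_len);
	memset(tmp_mask, 0xFF, pkt_len);
	ret = 0;

	free(tmp_spec);
	free(tmp_mask);
	return ret;
}
```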
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 1279823b12..0b92b9ab38 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1970,13 +1970,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -2041,13 +2041,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index b20103a452..f9db530504 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -696,13 +696,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -753,8 +753,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v7 27/27] net/ice: avoid rte malloc in flow pattern match
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
` (25 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-20 10:14 ` Anatoly Burakov
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-20 10:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
memory does not need to be stored in hugepage memory, so replace it with
regular malloc/free.
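The count-then-allocate pattern used for the item array can be sketched like this. A hedged sketch with a stub item type (the driver uses `struct rte_flow_item`); one side benefit of `calloc(n, size)` over `malloc(n * size)` is that the multiplication is overflow-checked.

```c
#include <stdlib.h>

/* Stub item type; in the driver this is struct rte_flow_item. */
struct item { int type; };
#define ITEM_END 0

/* Count items up to the END sentinel, then calloc() an exact-size
 * copy; calloc also guards the count*size multiplication against
 * overflow, which a hand-rolled malloc(count * size) would not. */
static struct item *copy_pattern(const struct item *pattern, size_t *out_n)
{
	size_t n = 1; /* include the END item itself */
	for (const struct item *it = pattern; it->type != ITEM_END; it++)
		n++;

	struct item *items = calloc(n, sizeof(*items));
	if (!items)
		return NULL;
	for (size_t i = 0; i < n; i++)
		items[i] = pattern[i];
	*out_n = n;
	return items;
}
```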
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 0b92b9ab38..93ab803b44 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2845,11 +2846,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 644958cccf..62f0c334a1 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2313,19 +2314,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2344,7 +2343,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2352,8 +2351,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index f9db530504..77829e607b 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1236,7 +1237,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
* [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-20 10:14 ` [PATCH v7 " Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (25 more replies)
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
19 siblings, 26 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on the driver bug fix patchset [1] (already integrated into next-net-intel)
as well as an IPsec struct fix [2].
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
[2] https://patches.dpdk.org/project/dpdk/patch/c87355f75826ec90a506dc8d4548e3f6af2b7e93.1771581658.git.anatoly.burakov@intel.com/
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
v4 -> v5:
- Adjusted typing for queue size
- Fixed missing zero initializations for stack allocations
v5 -> v6:
- Addressed feedback for v3, v4, and v5
- Changed more allocations to be stack based
- Reworked queue and IRQ map related i40e patches for better logic
v6 -> v7:
- Fixed offset logic in IRQ map
- (Hopefully) fixed zero-sized array initialization error for MSVC
v7 -> v8:
- Reverted to using calloc for IPsec patch because MSVC doesn't like
stack-initialized structs with zero-sized array members
- Merged two IPsec patches together
Anatoly Burakov (26):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 370 +++++++----------
drivers/net/intel/i40e/i40e_ethdev.h | 26 +-
drivers/net/intel/i40e/i40e_flow.c | 147 ++++---
drivers/net/intel/i40e/i40e_hash.c | 27 +-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +-
drivers/net/intel/i40e/rte_pmd_i40e.c | 60 +--
drivers/net/intel/iavf/iavf.h | 7 +-
drivers/net/intel/iavf/iavf_ethdev.c | 37 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 -
drivers/net/intel/iavf/iavf_hash.c | 13 +-
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 61 ++-
drivers/net/intel/iavf/iavf_vchnl.c | 413 ++++++++++---------
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 26 +-
drivers/net/intel/ice/ice_ethdev.c | 29 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 228 ++++++----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 720 insertions(+), 883 deletions(-)
--
2.47.3
* [PATCH v8 01/26] net/ixgbe: remove MAC type check macros
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (24 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
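One reason the macros read poorly is that they hid a `return` inside the check. If repetition becomes a concern, an inline predicate keeps call sites explicit while still centralizing the list; the sketch below is a hedged alternative with a stub enum, not the driver's `ixgbe_mac_type`.

```c
/* Stub MAC type enum; the driver's real enum has more members. */
enum mac_type {
	mac_82599EB, mac_X540, mac_X550, mac_X550EM_x, mac_X550EM_a,
	mac_E610, mac_other
};

/* Explicit predicates: no hidden control flow, the caller decides
 * what to do on an unsupported MAC type. */
static inline int mac_supports_ext_filter(enum mac_type t)
{
	return t == mac_82599EB || t == mac_X540;
}

static inline int mac_supports_filter(enum mac_type t)
{
	return mac_supports_ext_filter(t) ||
	       t == mac_X550 || t == mac_X550EM_x ||
	       t == mac_X550EM_a || t == mac_E610;
}
```

A call site then reads `if (!mac_supports_filter(hw->mac.type)) return -ENOTSUP;`, keeping the early return visible.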
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v8 02/26] net/ixgbe: remove security-related ifdefery
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (23 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index a6454cd1fe..3be0f0492a 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -460,7 +460,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -474,7 +473,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -631,9 +629,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -661,9 +657,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -675,7 +669,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -683,7 +676,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -871,10 +863,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1505,13 +1495,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2472,9 +2460,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2629,9 +2615,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2692,10 +2676,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2873,10 +2855,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3170,10 +3150,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5101,10 +5079,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5610,7 +5586,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5623,7 +5598,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v8 03/26] net/ixgbe: split security and ntuple filters
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
` (22 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code
with each other. Separate the security filter from the ntuple filter and
parse it separately.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 202 ++++++++++++++++-----------
1 file changed, 122 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..01cd4f9bde 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,112 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+ int ret;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action, which is
+ * const. however, we do need to act on the session, so either we do
+ * some kind of pointer based lookup to get session pointer internally
+ * (which quickly gets unwieldy for lots of flows case), or we simply
+ * cast away constness. the latter path was chosen.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+ if (ret) {
+ rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "Failed to add security session.");
+ return -rte_errno;
+ }
+ return 0;
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +699,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3133,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3369,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 04/26] net/i40e: get rid of global filter variables
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 05/26] net/i40e: make default RSS key global Anatoly Burakov
` (21 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact that
`rte_flow_validate()` is called directly from `rte_flow_create()`, with no way
to pass state between the two functions. Fix that by adding a small wrapper
around validation that creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything, so it
is omitted from the new structure.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 05/26] net/i40e: make default RSS key global
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (20 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but each
of those places defines it as a local variable. Make it a global constant and
adjust all callers to use it. When dealing with the adminq, we cannot send the
constant down directly because adminq commands do not guarantee const-ness, so
copy the RSS key into a local buffer before sending it to hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index b891215191..9ab8c35621 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9080,23 +9080,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 06/26] net/i40e: use unsigned types for queue comparisons
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 05/26] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 07/26] net/i40e: use proper flex len define Anatoly Burakov
` (19 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum traffic class
value of 64, we do not use unsigned values, which results in a compiler
warning when comparing `I40E_MAX_Q_PER_TC` to an unsigned value. Make it an
unsigned 16-bit constant, and adjust callers to use the correct types.
As a consequence, `i40e_align_floor` now returns an unsigned value as well.
This is correct, because nothing about that function implies that signed usage
is a valid use case.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 9ab8c35621..27fa789e21 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8976,11 +8976,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
/* Calculate the maximum number of contiguous PF queues that are configured */
-int
+uint16_t
i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
{
struct rte_eth_dev_data *data = pf->dev_data;
- int i, num;
+ int i;
+ uint16_t num;
struct ci_rx_queue *rxq;
num = 0;
@@ -9056,7 +9057,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ uint16_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
@@ -9072,7 +9073,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
return 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++)
- lut[i] = (uint8_t)(i % (uint32_t)num);
+ lut[i] = (uint8_t)(i % num);
return i40e_set_rss_lut(pf->main_vsi, lut, (uint16_t)i);
}
@@ -10769,7 +10770,7 @@ i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
return I40E_ERR_INVALID_QP_ID;
}
- qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+ qpnum_per_tc = RTE_MIN((uint16_t)i40e_align_floor(qpnum_per_tc),
I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..ca6638b32c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC UINT16_C(64)
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1456,7 +1456,7 @@ int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
void i40e_flex_payload_reg_set_default(struct i40e_hw *hw);
void i40e_pf_disable_rss(struct i40e_pf *pf);
-int i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
+uint16_t i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
int i40e_pf_reset_rss_reta(struct i40e_pf *pf);
int i40e_pf_reset_rss_key(struct i40e_pf *pf);
int i40e_pf_config_rss(struct i40e_pf *pf);
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..5756ebf255 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ uint16_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 07/26] net/i40e: use proper flex len define
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 08/26] net/i40e: remove global pattern variable Anatoly Burakov
` (18 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask bytes use different array length
defines, and the only reason this works is that they evaluate to the
same value.
Use the i40e-specific definition for both instead.
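As a hedged illustration of the hazard being fixed (the define names and the value 16 below are made up, not the real RTE/i40e values): two unrelated macros that merely happen to agree today keep the spec and mask arrays the same size, so the safe fix is to size both from the single HW-derived define:

```c
#include <stdint.h>

/* Hypothetical values: both happen to evaluate to 16, which is the
 * only reason the spec and mask arrays stayed the same length. */
#define GENERIC_MAX_FLEXLEN 16	/* generic ethdev-style limit */
#define HW_MAX_FLEX_LEN     16	/* what the HW actually supports */

struct fdir_flow_ext {
	uint8_t flexbytes[HW_MAX_FLEX_LEN];	/* was GENERIC_MAX_FLEXLEN */
	uint8_t flex_mask[HW_MAX_FLEX_LEN];	/* already HW-derived */
};
```

If the two defines ever diverged, code that copies `flexbytes` into `flex_mask` (or vice versa) would silently overrun one of the arrays; tying both to one define removes that failure mode.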
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index ca6638b32c..d57c53f661 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v8 08/26] net/i40e: remove global pattern variable
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 07/26] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (17 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, the current code cleans up the pattern list by
removing void flow items and copies the patterns into an array. When
dealing with 32 or fewer flow items, that array is a global variable;
when the pattern is bigger, a new list is dynamically allocated with
rte_zmalloc, which seems like overkill for this use case.
Remove the global variable, and replace the split behavior with
unconditional allocation.
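The shape of the change can be sketched in isolation (the names here are illustrative, not the driver's real symbols): instead of picking between a shared global buffer and a heap allocation, allocate unconditionally with plain calloc and free on every exit path:

```c
#include <stdlib.h>
#include <string.h>

struct item { int type; };

/* Copy items into a freshly allocated array. Unconditional calloc()
 * replaces the old "small counts reuse a shared global buffer, large
 * counts allocate" split, which was not reentrant. Caller must free()
 * the result; NULL signals allocation failure. */
static struct item *copy_items(const struct item *src, size_t n)
{
	struct item *dst = calloc(n, sizeof(*dst));

	if (dst == NULL)
		return NULL;	/* caller reports ENOMEM */
	memcpy(dst, src, n * sizeof(*dst));
	return dst;
}
```

The cleanup paths also get simpler: the old code had to test `items != g_items` before freeing, while here every path can just `free()` the pointer.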
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 30 ++++++++++--------------------
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..2791139e59 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -145,9 +146,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3835,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3854,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3864,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v8 09/26] net/i40e: avoid rte malloc in tunnel set
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 08/26] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (16 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory and the allocation size is pretty small, so replace it
with stack allocation.
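The pattern can be shown with a hypothetical stand-in for the AQ filter element (the struct and function below are illustrative, not the driver's): with a zero-initialized stack object there is no allocation that can fail and nothing to free on any early-return path, which is what lets the patch delete every `rte_free(cld_filter)` in the error handling.

```c
#include <stdint.h>

/* Hypothetical stand-in for the small AQ cloud-filter element. */
struct filter_elem {
	uint16_t flags;
	uint8_t data[64];
};

/* Build a filter on the stack; early returns need no cleanup. */
static int build_filter(struct filter_elem *out, uint16_t flags)
{
	struct filter_elem f = {0};	/* replaces rte_zmalloc + rte_free */

	if (flags == 0)
		return -1;	/* early return: nothing to release */
	f.flags = flags;
	*out = f;
	return 0;
}
```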
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 27fa789e21..f9e86b82b7 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8509,38 +8509,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8548,9 +8537,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8571,11 +8560,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8587,11 +8576,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8603,11 +8592,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8618,11 +8607,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8639,8 +8628,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8655,20 +8644,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8680,20 +8669,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8702,48 +8691,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8751,7 +8738,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8760,38 +8746,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8802,19 +8786,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v8 10/26] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (15 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This memory does
not need to be stored in hugepage memory and the maximum LUT size is 512
bytes, so replace it with stack-based allocation.
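The bounded variant of this pattern looks roughly like the following (a sketch with made-up names; `MAX_LUT_SIZE` stands in for `RTE_ETH_RSS_RETA_SIZE_512`): because the hardware maximum is known and small, the buffer can be sized to that maximum on the stack, provided the runtime size is validated first, mirroring the driver's `reta_size != lut_size` check.

```c
#include <stdint.h>

#define MAX_LUT_SIZE 512	/* stand-in for RTE_ETH_RSS_RETA_SIZE_512 */

/* Read a LUT of `size` entries into dst via a stack buffer sized to
 * the HW maximum. The size check up front makes the fixed-size stack
 * array safe and removes the need for a heap allocation. */
static int read_lut(uint8_t *dst, uint16_t size)
{
	uint8_t lut[MAX_LUT_SIZE] = {0};
	uint16_t i;

	if (size > MAX_LUT_SIZE)
		return -1;	/* reject sizes the HW cannot have */
	for (i = 0; i < size; i++)
		lut[i] = (uint8_t)i;	/* pretend HW register fill */
	for (i = 0; i < size; i++)
		dst[i] = lut[i];
	return 0;
}
```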
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index f9e86b82b7..ba66f9e3fd 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4619,7 +4619,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4630,14 +4630,9 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4648,9 +4643,6 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
-out:
- rte_free(lut);
-
return ret;
}
@@ -4662,7 +4654,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4673,15 +4665,9 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4689,9 +4675,6 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
return ret;
}
--
2.47.3
* [PATCH v8 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (14 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters,
we are using rte_zmalloc followed by an immediate rte_free. This memory
does not need to be stored in hugepage memory, so replace it with regular
malloc/free or stack allocation where appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 135 +++++++++++---------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 17 ++--
2 files changed, 65 insertions(+), 87 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index ba66f9e3fd..672d337d99 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6,6 +6,7 @@
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
+#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
@@ -4128,7 +4129,6 @@ static int
i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_mac_filter_info *mac_filter;
struct i40e_vsi *vsi = pf->main_vsi;
struct rte_eth_rxmode *rxmode;
struct i40e_mac_filter *f;
@@ -4163,12 +4163,12 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
+
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
return I40E_ERR_NO_MEMORY;
}
@@ -4206,7 +4206,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6198,7 +6197,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
int i, num;
struct i40e_mac_filter *f;
void *temp;
- struct i40e_mac_filter_info *mac_filter;
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
enum i40e_mac_filter_type desired_filter;
int ret = I40E_SUCCESS;
@@ -6211,12 +6210,9 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
num = vsi->mac_num;
-
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
+ return -1;
}
i = 0;
@@ -6228,7 +6224,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
i++;
}
@@ -6240,13 +6236,11 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
}
-DONE:
- rte_free(mac_filter);
- return ret;
+ return 0;
}
/* Configure vlan stripping on or off */
@@ -7128,19 +7122,20 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_add_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_add_macvlan_element_data *req_list =
+ (struct i40e_aqc_add_macvlan_element_data *)aq_buff;
+
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ buffer size bigger than max");
+ return I40E_ERR_NO_MEMORY;
+ }
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
- return I40E_ERR_NO_MEMORY;
- }
-
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7169,8 +7164,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC match type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].queue_number = 0;
@@ -7182,14 +7176,11 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
actual_num, NULL);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add macvlan filter");
- goto DONE;
+ return ret;
}
num += actual_num;
} while (num < total);
-
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7202,21 +7193,22 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_remove_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_remove_macvlan_element_data *req_list =
+ (struct i40e_aqc_remove_macvlan_element_data *)aq_buff;
enum i40e_admin_queue_err aq_status;
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
- ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
- ele_buff_size = hw->aq.asq_buf_size;
-
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
PMD_DRV_LOG(ERR, "AdminQ buffer size bigger than max");
return I40E_ERR_NO_MEMORY;
}
+ ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
+ ele_buff_size = hw->aq.asq_buf_size;
+
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7245,8 +7237,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC filter type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].flags = rte_cpu_to_le_16(flags);
}
@@ -7260,15 +7251,13 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ret = I40E_SUCCESS;
} else {
PMD_DRV_LOG(ERR, "Failed to remove macvlan filter");
- goto DONE;
+ return ret;
}
}
num += actual_num;
} while (num < total);
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
/* Find out specific MAC filter */
@@ -7436,7 +7425,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7465,7 +7454,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7473,7 +7462,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
int
i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7490,37 +7479,31 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
i40e_set_vlan_filter(vsi, vlan, 1);
vsi->vlan_num++;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7541,42 +7524,36 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
/* This is last vlan to remove, replace all mac filter with vlan 0 */
if (vsi->vlan_num == 1) {
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
}
i40e_set_vlan_filter(vsi, vlan, 0);
vsi->vlan_num--;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7607,7 +7584,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7646,7 +7623,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7677,7 +7654,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7706,7 +7683,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..4839a1d9bf 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -2,6 +2,7 @@
* Copyright(c) 2010-2017 Intel Corporation
*/
+#include <stdlib.h>
#include <eal_export.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -233,7 +234,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +251,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +295,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +313,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 12/26] net/i40e: avoid rte malloc in VF resource queries
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (13 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This memory does not need to be stored in hugepage memory and
the allocation size is pretty small, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..08cdd6bc4d 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
+ uint32_t len = sizeof(res);
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v8 13/26] net/i40e: avoid rte malloc in adminq operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (12 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 672d337d99..cd648285d1 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6870,14 +6870,11 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_arq_event_info info;
uint16_t pending, opcode;
+ uint8_t msg_buf[I40E_AQ_BUF_SZ] = {0};
int ret;
- info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
- if (!info.msg_buf) {
- PMD_DRV_LOG(ERR, "Failed to allocate mem");
- return;
- }
+ info.buf_len = sizeof(msg_buf);
+ info.msg_buf = msg_buf;
pending = 1;
while (pending) {
@@ -6913,7 +6910,6 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v8 14/26] net/i40e: avoid rte malloc in DDP package handling
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (11 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by an
immediate rte_free. This memory does not need to be stored in hugepage
memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++----------------------
1 file changed, 8 insertions(+), 35 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index 4839a1d9bf..7892fa8a4e 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1557,7 +1557,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
{
struct rte_eth_dev *dev = &rte_eth_devices[port];
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint8_t *buff;
+ uint8_t buff[(I40E_MAX_PROFILE_NUM + 4) * I40E_PROFILE_INFO_SIZE] = {0};
struct rte_pmd_i40e_profile_list *p_list;
struct rte_pmd_i40e_profile_info *pinfo, *p;
uint32_t i;
@@ -1570,13 +1570,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
- if (!buff) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return -1;
- }
ret = i40e_aq_get_ddp_list(
hw, (void *)buff,
@@ -1584,7 +1577,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1592,20 +1584,17 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
return 2;
}
}
@@ -1616,12 +1605,9 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
return 3;
}
}
-
- rte_free(buff);
return 0;
}
@@ -1637,7 +1623,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1702,26 +1691,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1734,13 +1712,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1752,7 +1728,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1765,14 +1740,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1785,7 +1759,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v8 15/26] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (10 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by an immediate
rte_free. This memory does not need to be stored in hugepage memory, so
replace it with stack allocation or regular calloc/free as appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 43 ++++++++--------------------
1 file changed, 12 insertions(+), 31 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index cd648285d1..af736f59be 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11716,8 +11716,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint32_t pctype_num;
- struct rte_pmd_i40e_ptype_info *pctype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info pctype[I40E_CUSTOMIZED_MAX] = {0};
struct i40e_customized_pctype *new_pctype = NULL;
uint8_t proto_id;
uint8_t pctype_value;
@@ -11743,19 +11742,16 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
- if (!pctype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (pctype_num > RTE_DIM(pctype)) {
+ PMD_DRV_LOG(ERR, "Pctype number exceeds maximum supported");
return -1;
}
/* get information about new pctype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)pctype, buff_size,
+ (uint8_t *)pctype, sizeof(pctype),
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
return -1;
}
@@ -11836,7 +11832,6 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
return 0;
}
@@ -11846,11 +11841,10 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
struct rte_pmd_i40e_proto_info *proto,
enum rte_pmd_i40e_package_op op)
{
- struct rte_pmd_i40e_ptype_mapping *ptype_mapping;
+ struct rte_pmd_i40e_ptype_mapping ptype_mapping[I40E_MAX_PKT_TYPE] = {0};
uint16_t port_id = dev->data->port_id;
uint32_t ptype_num;
- struct rte_pmd_i40e_ptype_info *ptype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info ptype[I40E_MAX_PKT_TYPE] = {0};
uint8_t proto_id;
char name[RTE_PMD_I40E_DDP_NAME_SIZE];
uint32_t i, j, n;
@@ -11881,31 +11875,20 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
- if (!ptype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (ptype_num > RTE_DIM(ptype)) {
+ PMD_DRV_LOG(ERR, "Too many ptypes");
return -1;
}
/* get information about new ptype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)ptype, buff_size,
+ (uint8_t *)ptype, sizeof(ptype),
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
- if (!ptype_mapping) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
- return -1;
- }
-
/* Update ptype mapping table. */
for (i = 0; i < ptype_num; i++) {
ptype_mapping[i].hw_ptype = ptype[i].ptype_id;
@@ -12040,8 +12023,6 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
return ret;
}
@@ -12076,7 +12057,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12088,7 +12069,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12126,7 +12107,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v8 16/26] net/iavf: remove remnants of pipeline mode
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (9 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of its supporting code (the flow
classification stage enum and the parser's stage field) was left behind.
Remove it, as it is now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index ab41b1973e..82323b9aa9 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v8 17/26] net/iavf: decouple hash uninit from parser uninit
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (8 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization;
instead, make it a separate step in the dev close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf.h | 1 +
drivers/net/intel/iavf/iavf_ethdev.c | 3 +++
drivers/net/intel/iavf/iavf_hash.c | 12 ++++++++----
3 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 39949acc11..6054321771 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -566,4 +566,5 @@ void iavf_dev_watchdog_disable(struct iavf_adapter *adapter);
void iavf_handle_hw_reset(struct rte_eth_dev *dev, bool vf_initiated_reset);
void iavf_set_no_poll(struct iavf_adapter *adapter, bool link_change);
bool is_iavf_supported(struct rte_eth_dev *dev);
+void iavf_hash_uninit(struct iavf_adapter *ad);
#endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 954bce723d..b45da4d8b1 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -2977,6 +2977,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..cb10eeab78 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -77,7 +77,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +680,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1641,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1664,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
--
2.47.3
* [PATCH v8 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (7 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses
rte_malloc to allocate VF message structures. This memory does not need
to be stored in hugepage memory and the allocations are small, so replace
them with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 60 ++++++++++------------
1 file changed, 28 insertions(+), 32 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 82323b9aa9..fe540e76cb 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -467,7 +467,7 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_cfg);
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -475,7 +475,7 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -553,8 +553,8 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -728,8 +728,7 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -770,8 +769,7 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -792,8 +790,8 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
rc = response->ipsec_data.sp_cfg_resp->rule_id;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -808,7 +806,7 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -816,7 +814,7 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -846,8 +844,8 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
rc = response->ipsec_data.ipsec_resp->resp;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -905,7 +903,7 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -913,7 +911,7 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -944,8 +942,8 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
return response->ipsec_data.ipsec_status->status;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -962,7 +960,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_destroy);
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -971,7 +969,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1013,8 +1011,8 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
rc = -EFAULT;
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1168,7 +1166,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1176,8 +1174,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1203,8 +1200,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1593,7 +1590,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1601,8 +1598,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_status);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1628,8 +1624,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 19/26] net/iavf: avoid rte malloc in RSS configuration
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (6 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (redirection table, lookup table, and
hash key), we are using rte_zmalloc followed by an immediate rte_free.
This memory does not need to be stored in hugepage memory, and in the
context of IAVF we do not define how big these structures can be, so
replace the allocations with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index b45da4d8b1..4e0df2ca05 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1553,7 +1553,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1573,7 +1573,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v8 20/26] net/iavf: avoid rte malloc in MAC address operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (5 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
The original code also had a loop that split the MAC address list into
multiple virtchnl messages. However, the maximum number of MAC addresses
is 64, each MAC address entry is 8 bytes, and the maximum virtchnl
message size is 4K, so splitting the list up is unnecessary. This loop
has been removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 82 ++++++++++-------------------
1 file changed, 29 insertions(+), 53 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..f44dc7e7be 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1380,63 +1380,39 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
void
iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
- struct virtchnl_ether_addr_list *list;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[IAVF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct rte_ether_addr *addr;
- struct iavf_cmd_info args;
- int len, err, i, j;
- int next_begin = 0;
- int begin = 0;
+ struct iavf_cmd_info args = {0};
+ int err, i;
- do {
- j = 0;
- len = sizeof(struct virtchnl_ether_addr_list);
- for (i = begin; i < IAVF_NUM_MACADDR_MAX; i++, next_begin++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- len += sizeof(struct virtchnl_ether_addr);
- if (len >= IAVF_AQ_BUF_SZ) {
- next_begin = i + 1;
- break;
- }
- }
+ for (i = 0; i < IAVF_NUM_MACADDR_MAX; i++) {
+ struct rte_ether_addr *addr = &adapter->dev_data->mac_addrs[i];
+ struct virtchnl_ether_addr *vc_addr = &list->list[list->num_elements];
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return;
- }
+ /* ignore empty addresses */
+ if (rte_is_zero_ether_addr(addr))
+ continue;
+ list->num_elements++;
- for (i = begin; i < next_begin; i++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- rte_memcpy(list->list[j].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
- list->list[j].type = (j == 0 ?
- VIRTCHNL_ETHER_ADDR_PRIMARY :
- VIRTCHNL_ETHER_ADDR_EXTRA);
- PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
- RTE_ETHER_ADDR_BYTES(addr));
- j++;
- }
- list->vsi_id = vf->vsi_res->vsi_id;
- list->num_elements = j;
- args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
- VIRTCHNL_OP_DEL_ETH_ADDR;
- args.in_args = (uint8_t *)list;
- args.in_args_size = len;
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command %s",
- add ? "OP_ADD_ETHER_ADDRESS" :
- "OP_DEL_ETHER_ADDRESS");
- rte_free(list);
- begin = next_begin;
- } while (begin < IAVF_NUM_MACADDR_MAX);
+ memcpy(vc_addr->addr, addr->addr_bytes, sizeof(addr->addr_bytes));
+ vc_addr->type = (list->num_elements == 1) ?
+ VIRTCHNL_ETHER_ADDR_PRIMARY :
+ VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+ list->vsi_id = vf->vsi_res->vsi_id;
+ args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.in_args = (uint8_t *)list;
+ args.in_args_size = sizeof(list_req);
+ args.out_buffer = vf->aq_resp;
+ args.out_size = IAVF_AQ_BUF_SZ;
+ err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" : "OP_DEL_ETHER_ADDRESS");
}
int
--
2.47.3
* [PATCH v8 21/26] net/iavf: avoid rte malloc in queue operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (4 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is not needed as these
structures are not being stored anywhere, so replace them with stack
allocation.
The original code did not check the maximum queue number, because the
design was built around an anti-pattern of the caller having to chunk
the queue configuration. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +-
drivers/net/intel/iavf/iavf_vchnl.c | 212 ++++++++++++++-------------
3 files changed, 115 insertions(+), 115 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 6054321771..77a2c94290 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -503,8 +503,7 @@ int iavf_disable_queues(struct iavf_adapter *adapter);
int iavf_disable_queues_lv(struct iavf_adapter *adapter);
int iavf_configure_rss_lut(struct iavf_adapter *adapter);
int iavf_configure_rss_key(struct iavf_adapter *adapter);
-int iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index);
+int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs);
int iavf_get_supported_rxdid(struct iavf_adapter *adapter);
int iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, bool enable);
int iavf_config_vlan_insert_v2(struct iavf_adapter *adapter, bool enable);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 4e0df2ca05..6e216f4c0f 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1036,20 +1036,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_set_vf_quanta_size(adapter, index, num_queue_pairs) != 0)
PMD_DRV_LOG(WARNING, "configure quanta size failed");
- /* If needed, send configure queues msg multiple times to make the
- * adminq buffer length smaller than the 4K limitation.
- */
- while (num_queue_pairs > IAVF_CFG_Q_NUM_PER_BUF) {
- if (iavf_configure_queues(adapter,
- IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
- PMD_DRV_LOG(ERR, "configure queues failed");
- goto error;
- }
- num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
- index += IAVF_CFG_Q_NUM_PER_BUF;
- }
-
- if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
+ if (iavf_configure_queues(adapter, num_queue_pairs) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
goto error;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f44dc7e7be..f0ab3b950b 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,14 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1125,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1133,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1214,88 +1200,116 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
return err;
}
-int
-iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index)
+static void
+iavf_configure_queue_pair(struct iavf_adapter *adapter,
+ struct virtchnl_queue_pair_info *vc_qp,
+ uint16_t q_idx)
{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct ci_rx_queue **rxq = (struct ci_rx_queue **)adapter->dev_data->rx_queues;
struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
+
+ /* common parts */
+ vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->txq.queue_id = q_idx;
+
+ vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->rxq.queue_id = q_idx;
+ vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+
+ /* is this txq active? */
+ if (q_idx < adapter->dev_data->nb_tx_queues) {
+ vc_qp->txq.ring_len = txq[q_idx]->nb_tx_desc;
+ vc_qp->txq.dma_ring_addr = txq[q_idx]->tx_ring_dma;
+ }
+
+ /* is this rxq active? */
+ if (q_idx >= adapter->dev_data->nb_rx_queues)
+ return;
+
+ vc_qp->rxq.ring_len = rxq[q_idx]->nb_rx_desc;
+ vc_qp->rxq.dma_ring_addr = rxq[q_idx]->rx_ring_phys_addr;
+ vc_qp->rxq.databuffer_size = rxq[q_idx]->rx_buf_len;
+ vc_qp->rxq.crc_disable = rxq[q_idx]->crc_len != 0 ? 1 : 0;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ if (vf->supported_rxdid & RTE_BIT64(rxq[q_idx]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[q_idx]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, q_idx);
+ } else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[q_idx]->rxdid, IAVF_RXDID_LEGACY_1, q_idx);
+ vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+ }
+
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
+ vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
+ rxq[q_idx]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
+ }
+}
+
+static int
+iavf_configure_queue_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
+{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_vsi_queue_config_info *vc_config;
- struct virtchnl_queue_pair_info *vc_qp;
- struct iavf_cmd_info args;
- uint16_t i, size;
+ struct {
+ struct virtchnl_vsi_queue_config_info config;
+ struct virtchnl_queue_pair_info qp[IAVF_CFG_Q_NUM_PER_BUF];
+ } queue_req = {0};
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_vsi_queue_config_info *vc_config = &queue_req.config;
+ struct virtchnl_queue_pair_info *vc_qp = vc_config->qpair;
+ uint16_t chunk_end = chunk_start + chunk_sz;
+ uint16_t i;
int err;
- size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
- if (!vc_config)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
vc_config->vsi_id = vf->vsi_res->vsi_id;
- vc_config->num_queue_pairs = num_queue_pairs;
+ vc_config->num_queue_pairs = chunk_sz;
- for (i = index, vc_qp = vc_config->qpair;
- i < index + num_queue_pairs;
- i++, vc_qp++) {
- vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->txq.queue_id = i;
+ for (i = chunk_start; i < chunk_end; i++, vc_qp++)
+ iavf_configure_queue_pair(adapter, vc_qp, i);
- /* Virtchnnl configure tx queues by pairs */
- if (i < adapter->dev_data->nb_tx_queues) {
- vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
- }
-
- vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->rxq.queue_id = i;
- vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
-
- if (i >= adapter->dev_data->nb_rx_queues)
- continue;
-
- /* Virtchnnl configure rx queues by pairs */
- vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
- vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
- vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
- vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
- if (vf->supported_rxdid & RTE_BIT64(rxq[i]->rxdid)) {
- vc_qp->rxq.rxdid = rxq[i]->rxdid;
- PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
- vc_qp->rxq.rxdid, i);
- } else {
- PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
- "request default RXDID[%d] in Queue[%d]",
- rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
- vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- }
-
- if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
- vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
- rxq[i]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
- }
- }
-
- memset(&args, 0, sizeof(args));
args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
args.in_args = (uint8_t *)vc_config;
- args.in_args_size = size;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
- PMD_DRV_LOG(ERR, "Failed to execute command of"
- " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
-
- rte_free(vc_config);
+ PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL_OP_CONFIG_VSI_QUEUES");
return err;
}
+int
+iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs)
+{
+ uint16_t c;
+ int err;
+
+ /*
+ * we cannot configure all queues in one go because they won't fit into
+ * adminq buffer, so we're going to chunk them instead
+ */
+ for (c = 0; c < num_queue_pairs; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num_queue_pairs - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_configure_queue_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
+}
+
int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
--
2.47.3
* [PATCH v8 22/26] net/iavf: avoid rte malloc in irq map config
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (3 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This memory does not need to be stored in
hugepage memory, so replace it with stack allocation.
The original code did not check the maximum IRQ map size, because the
design was built around an anti-pattern of the caller having to chunk
the IRQ map calls. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +---
drivers/net/intel/iavf/iavf_vchnl.c | 111 +++++++++++++++++----------
3 files changed, 73 insertions(+), 56 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 77a2c94290..f9bb398a77 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -511,8 +511,7 @@ int iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t vlanid,
bool add);
int iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter);
int iavf_config_irq_map(struct iavf_adapter *adapter);
-int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index);
+int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num);
void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
int iavf_dev_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 6e216f4c0f..26e7febecf 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -919,20 +919,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
goto config_irq_map_err;
}
} else {
- uint16_t num_qv_maps = dev->data->nb_rx_queues;
- uint16_t index = 0;
-
- while (num_qv_maps > IAVF_IRQ_MAP_NUM_PER_BUF) {
- if (iavf_config_irq_map_lv(adapter,
- IAVF_IRQ_MAP_NUM_PER_BUF, index)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
- goto config_irq_map_err;
- }
- num_qv_maps -= IAVF_IRQ_MAP_NUM_PER_BUF;
- index += IAVF_IRQ_MAP_NUM_PER_BUF;
- }
-
- if (iavf_config_irq_map_lv(adapter, num_qv_maps, index)) {
+ if (iavf_config_irq_map_lv(adapter, dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
goto config_irq_map_err;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index f0ab3b950b..dce4122410 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1314,81 +1314,112 @@ int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_irq_map_info *map_info;
- struct virtchnl_vector_map *vecmap;
- struct iavf_cmd_info args;
- int len, i, err;
+ struct {
+ struct virtchnl_irq_map_info map_info;
+ struct virtchnl_vector_map vecmap[IAVF_MAX_NUM_QUEUES_DFLT];
+ } map_req = {0};
+ struct virtchnl_irq_map_info *map_info = &map_req.map_info;
+ struct iavf_cmd_info args = {0};
+ int i, err, max_vmi = -1;
- len = sizeof(struct virtchnl_irq_map_info) +
- sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+ if (adapter->dev_data->nb_rx_queues > IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "number of queues (%u) exceeds the max supported (%u)",
+ adapter->dev_data->nb_rx_queues, IAVF_MAX_NUM_QUEUES_DFLT);
+ return -EINVAL;
+ }
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
-
- map_info->num_vectors = vf->nb_msix;
for (i = 0; i < adapter->dev_data->nb_rx_queues; i++) {
- vecmap =
- &map_info->vecmap[vf->qv_map[i].vector_id - vf->msix_base];
+ struct virtchnl_vector_map *vecmap;
+ /* always 0 for 1 MSIX, never bigger than rxq for multi MSIX */
+ uint16_t vmi = vf->qv_map[i].vector_id - vf->msix_base;
+
+ /* can't happen but avoid static analysis warnings */
+ if (vmi >= IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "vector id (%u) exceeds the max supported (%u)",
+ vf->qv_map[i].vector_id,
+ vf->msix_base + IAVF_MAX_NUM_QUEUES_DFLT - 1);
+ return -EINVAL;
+ }
+
+ vecmap = &map_info->vecmap[vmi];
vecmap->vsi_id = vf->vsi_res->vsi_id;
vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
vecmap->vector_id = vf->qv_map[i].vector_id;
vecmap->txq_map = 0;
vecmap->rxq_map |= 1 << vf->qv_map[i].queue_id;
+
+ /* MSIX vectors round robin so look for max */
+ if (vmi > max_vmi) {
+ map_info->num_vectors++;
+ max_vmi = vmi;
+ }
}
args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(map_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
return err;
}
-int
-iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index)
+static int
+iavf_config_irq_map_lv_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
{
+ struct {
+ struct virtchnl_queue_vector_maps map_info;
+ struct virtchnl_queue_vector qv_maps[IAVF_CFG_Q_NUM_PER_BUF];
+ } chunk_req = {0};
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_queue_vector_maps *map_info;
- struct virtchnl_queue_vector *qv_maps;
- struct iavf_cmd_info args;
- int len, i, err;
- int count = 0;
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_queue_vector_maps *map_info = &chunk_req.map_info;
+ struct virtchnl_queue_vector *qv_maps = chunk_req.qv_maps;
+ uint16_t i;
- len = sizeof(struct virtchnl_queue_vector_maps) +
- sizeof(struct virtchnl_queue_vector) * (num - 1);
-
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
map_info->vport_id = vf->vsi_res->vsi_id;
- map_info->num_qv_maps = num;
- for (i = index; i < index + map_info->num_qv_maps; i++) {
- qv_maps = &map_info->qv_maps[count++];
+ map_info->num_qv_maps = chunk_sz;
+ for (i = 0; i < chunk_sz; i++) {
+ qv_maps = &map_info->qv_maps[i];
qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
- qv_maps->queue_id = vf->qv_map[i].queue_id;
- qv_maps->vector_id = vf->qv_map[i].vector_id;
+ qv_maps->queue_id = vf->qv_map[chunk_start + i].queue_id;
+ qv_maps->vector_id = vf->qv_map[chunk_start + i].vector_id;
}
args.ops = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = sizeof(chunk_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
- return err;
+ return iavf_execute_vf_cmd_safe(adapter, &args, 0);
+}
+
+int
+iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num)
+{
+ uint16_t c;
+ int err;
+
+ for (c = 0; c < num; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_config_irq_map_lv_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure irq map chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
}
void
--
2.47.3
* [PATCH v8 23/26] net/ice: avoid rte malloc in RSS RETA operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (2 subsequent siblings)
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA),
we are using rte_zmalloc followed by an immediate rte_free. This memory
does not need to be stored in hugepage memory, so replace it with stack
allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 29 ++++++--------------------
2 files changed, 8 insertions(+), 25 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index abd7875e7b..388495d69c 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1336,7 +1336,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1356,7 +1356,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index 41474b7002..0d6b030536 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5564,7 +5564,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 &&
@@ -5581,14 +5581,9 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = ice_get_rss_lut(pf->main_vsi, lut, lut_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5604,10 +5599,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
pf->hash_lut_size = reta_size;
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
@@ -5618,7 +5610,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != lut_size) {
@@ -5630,15 +5622,9 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5647,10 +5633,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v8 24/26] net/ice: avoid rte malloc in MAC address operations
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 388495d69c..0d3599d7d0 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -926,19 +926,14 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
struct rte_ether_addr *mc_addrs,
uint32_t mc_addrs_num, bool add)
{
- struct virtchnl_ether_addr_list *list;
- struct dcf_virtchnl_cmd args;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[DCF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
+ struct dcf_virtchnl_cmd args = {0};
uint32_t i;
- int len, err = 0;
-
- len = sizeof(struct virtchnl_ether_addr_list);
- len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
-
- list = rte_zmalloc(NULL, len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return -ENOMEM;
- }
+ int err = 0;
for (i = 0; i < mc_addrs_num; i++) {
memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
@@ -953,13 +948,12 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
VIRTCHNL_OP_DEL_ETH_ADDR;
args.req_msg = (uint8_t *)list;
- args.req_msglen = len;
+ args.req_msglen = sizeof(list_req);
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
return err;
}
--
2.47.3
* [PATCH v8 25/26] net/ice: avoid rte malloc in raw pattern parsing
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 1279823b12..0b92b9ab38 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1970,13 +1970,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -2041,13 +2041,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index b20103a452..f9db530504 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -696,13 +696,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -753,8 +753,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v8 26/26] net/ice: avoid rte malloc in flow pattern match
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-24 12:23 ` Anatoly Burakov
25 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 12:23 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
memory does not need to be stored in hugepage memory, so replace it with
regular malloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 0b92b9ab38..93ab803b44 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2845,11 +2846,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 644958cccf..62f0c334a1 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2313,19 +2314,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2344,7 +2343,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2352,8 +2351,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index f9db530504..77829e607b 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1236,7 +1237,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
* [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-09 14:13 [PATCH v1 00/12] Cleanups for ixgbe, i40e, and iavf PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
` (26 more replies)
19 siblings, 27 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev
This patchset is an assortment of cleanups for the ixgbe, i40e, iavf, and ice PMDs.
IXGBE:
- Remove unnecessary macros and #ifdef's
- Disentangle unrelated flow API code paths
I40E:
- Get rid of global variables and unnecessary allocations
- Reduce code duplication around default RSS keys
- Use more appropriate integer types and definitions
IAVF:
- Remove dead code
- Remove unnecessary allocations
- Separate RSS uninit from hash flow parser uninit
ICE:
- Remove unnecessary allocations
This is done in preparation for further rework.
Note that this patchset depends on driver bug fix patchset [1] as well as an
IPsec struct fix [2] (both already integrated into next-net-intel).
[1] https://patches.dpdk.org/project/dpdk/list/?series=37350
[2] https://patches.dpdk.org/project/dpdk/patch/c87355f75826ec90a506dc8d4548e3f6af2b7e93.1771581658.git.anatoly.burakov@intel.com/
v1 -> v2:
- Added more cleanups around rte_malloc usage
v2 -> v3:
- Reworded some commit messages
- Added a new patch for ICE
- Rebased on latest bug fix patches
v3 -> v4:
- Rebased on latest bugfix patchset
v4 -> v5:
- Adjusted typing for queue size
- Fixed missing zero initializations for stack allocations
v5 -> v6:
- Addressed feedback for v3, v4, and v5
- Changed more allocations to be stack based
- Reworked queue and IRQ map related i40e patches for better logic
v6 -> v7:
- Fixed offset logic in IRQ map
- (Hopefully) fixed zero-sized array initialization error for MSVC
v7 -> v8:
- Reverted to using calloc for IPsec patch because MSVC doesn't like
stack-initialized structs with zero-sized array members
- Merged two IPsec patches together
v8 -> v9:
- Fixed testing failures in IAVF on account of "wrong" adminq buffer sizes
Anatoly Burakov (26):
net/ixgbe: remove MAC type check macros
net/ixgbe: remove security-related ifdefery
net/ixgbe: split security and ntuple filters
net/i40e: get rid of global filter variables
net/i40e: make default RSS key global
net/i40e: use unsigned types for queue comparisons
net/i40e: use proper flex len define
net/i40e: remove global pattern variable
net/i40e: avoid rte malloc in tunnel set
net/i40e: avoid rte malloc in RSS RETA operations
net/i40e: avoid rte malloc in MAC/VLAN filtering
net/i40e: avoid rte malloc in VF resource queries
net/i40e: avoid rte malloc in adminq operations
net/i40e: avoid rte malloc in DDP package handling
net/i40e: avoid rte malloc in DDP ptype handling
net/iavf: remove remnants of pipeline mode
net/iavf: decouple hash uninit from parser uninit
net/iavf: avoid rte malloc in VF mailbox for IPsec
net/iavf: avoid rte malloc in RSS configuration
net/iavf: avoid rte malloc in MAC address operations
net/iavf: avoid rte malloc in queue operations
net/iavf: avoid rte malloc in irq map config
net/ice: avoid rte malloc in RSS RETA operations
net/ice: avoid rte malloc in MAC address operations
net/ice: avoid rte malloc in raw pattern parsing
net/ice: avoid rte malloc in flow pattern match
drivers/net/intel/i40e/i40e_ethdev.c | 370 +++++++---------
drivers/net/intel/i40e/i40e_ethdev.h | 26 +-
drivers/net/intel/i40e/i40e_flow.c | 147 +++----
drivers/net/intel/i40e/i40e_hash.c | 27 +-
drivers/net/intel/i40e/i40e_hash.h | 3 +
drivers/net/intel/i40e/i40e_pf.c | 26 +-
drivers/net/intel/i40e/rte_pmd_i40e.c | 60 +--
drivers/net/intel/iavf/iavf.h | 7 +-
drivers/net/intel/iavf/iavf_ethdev.c | 37 +-
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 -
drivers/net/intel/iavf/iavf_hash.c | 13 +-
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 61 ++-
drivers/net/intel/iavf/iavf_vchnl.c | 432 ++++++++++---------
drivers/net/intel/ice/ice_acl_filter.c | 3 +-
drivers/net/intel/ice/ice_dcf_ethdev.c | 26 +-
drivers/net/intel/ice/ice_ethdev.c | 29 +-
drivers/net/intel/ice/ice_fdir_filter.c | 19 +-
drivers/net/intel/ice/ice_generic_flow.c | 15 +-
drivers/net/intel/ice/ice_hash.c | 13 +-
drivers/net/intel/ice/ice_switch_filter.c | 5 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 16 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 228 ++++++----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 -
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -
28 files changed, 740 insertions(+), 882 deletions(-)
--
2.47.3
* [PATCH v9 01/26] net/ixgbe: remove MAC type check macros
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
` (25 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The macros used were not informative and did not add any value beyond code
golf, so remove them and make MAC type checks explicit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 12 ------------
drivers/net/intel/ixgbe/ixgbe_flow.c | 20 +++++++++++++++++---
2 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 5dbd659941..7dc02a472b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -137,18 +137,6 @@
#define IXGBE_MAX_FDIR_FILTER_NUM (1024 * 32)
#define IXGBE_MAX_L2_TN_FILTER_NUM 128
-#define MAC_TYPE_FILTER_SUP_EXT(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540)\
- return -ENOTSUP;\
-} while (0)
-
-#define MAC_TYPE_FILTER_SUP(type) do {\
- if ((type) != ixgbe_mac_82599EB && (type) != ixgbe_mac_X540 &&\
- (type) != ixgbe_mac_X550 && (type) != ixgbe_mac_X550EM_x &&\
- (type) != ixgbe_mac_X550EM_a && (type) != ixgbe_mac_E610)\
- return -ENOTSUP;\
-} while (0)
-
/* Link speed for X550 auto negotiation */
#define IXGBE_LINK_SPEED_X550_AUTONEG (IXGBE_LINK_SPEED_100_FULL | \
IXGBE_LINK_SPEED_1GB_FULL | \
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6a7edc6377..c8d6237f27 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -654,7 +654,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP_EXT(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540)
+ return -ENOTSUP;
ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error);
@@ -894,7 +896,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_ethertype_filter(attr, pattern,
actions, filter, error);
@@ -1183,7 +1191,13 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- MAC_TYPE_FILTER_SUP(hw->mac.type);
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
ret = cons_parse_syn_filter(attr, pattern,
actions, filter, error);
--
2.47.3
* [PATCH v9 02/26] net/ixgbe: remove security-related ifdefery
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
` (24 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
The security library is specified as an explicit dependency for ixgbe, so
there is no longer any need to gate features behind #ifdef blocks that
depend on the presence of this library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 8 ------
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ---
drivers/net/intel/ixgbe/ixgbe_flow.c | 6 -----
drivers/net/intel/ixgbe/ixgbe_rxtx.c | 26 --------------------
drivers/net/intel/ixgbe/ixgbe_rxtx.h | 2 --
drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c | 6 -----
6 files changed, 52 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 11500a923c..57d929cf2c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -32,9 +32,7 @@
#include <rte_random.h>
#include <dev_driver.h>
#include <rte_hash_crc.h>
-#ifdef RTE_LIB_SECURITY
#include <rte_security_driver.h>
-#endif
#include <rte_os_shim.h>
#include "ixgbe_logs.h"
@@ -1177,11 +1175,9 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* Unlock any pending hardware semaphore */
ixgbe_swfw_lock_reset(hw);
-#ifdef RTE_LIB_SECURITY
/* Initialize security_ctx only for primary process*/
if (ixgbe_ipsec_ctx_create(eth_dev))
return -ENOMEM;
-#endif
/* Initialize DCB configuration*/
memset(dcb_config, 0, sizeof(struct ixgbe_dcb_config));
@@ -1362,10 +1358,8 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_free(eth_dev->data->hash_mac_addrs);
eth_dev->data->hash_mac_addrs = NULL;
err_exit:
-#ifdef RTE_LIB_SECURITY
rte_free(eth_dev->security_ctx);
eth_dev->security_ctx = NULL;
-#endif
return ret;
}
@@ -3148,10 +3142,8 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
-#ifdef RTE_LIB_SECURITY
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
-#endif
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 7dc02a472b..32d7b98ed1 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -14,9 +14,7 @@
#include "base/ixgbe_dcb_82599.h"
#include "base/ixgbe_dcb_82598.h"
#include "ixgbe_bypass.h"
-#ifdef RTE_LIB_SECURITY
#include "ixgbe_ipsec.h"
-#endif
#include <rte_flow.h>
#include <rte_time.h>
#include <rte_hash.h>
@@ -480,9 +478,7 @@ struct ixgbe_adapter {
struct ixgbe_filter_info filter;
struct ixgbe_l2_tn_info l2_tn;
struct ixgbe_bw_conf bw_conf;
-#ifdef RTE_LIB_SECURITY
struct ixgbe_ipsec ipsec;
-#endif
bool rx_bulk_alloc_allowed;
bool rx_vec_allowed;
struct rte_timecounter systime_tc;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c8d6237f27..491e8bccc5 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,7 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-#ifdef RTE_LIB_SECURITY
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -282,7 +281,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
}
-#endif
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -663,11 +661,9 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (filter->proto == IPPROTO_ESP)
return 0;
-#endif
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
@@ -3107,7 +3103,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
-#ifdef RTE_LIB_SECURITY
/* ESP flow not really a flow*/
if (ntuple_filter.proto == IPPROTO_ESP) {
if (ret != 0)
@@ -3115,7 +3110,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
flow->is_security = true;
return flow;
}
-#endif
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index a6454cd1fe..3be0f0492a 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -460,7 +460,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
seqnum_seed |= tx_offload.l2_len
<< IXGBE_ADVTXD_TUNNEL_LEN;
}
-#ifdef RTE_LIB_SECURITY
if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
union ixgbe_crypto_tx_desc_md *md =
(union ixgbe_crypto_tx_desc_md *)mdata;
@@ -474,7 +473,6 @@ ixgbe_set_xmit_ctx(struct ci_tx_queue *txq,
tx_offload_mask.sa_idx |= ~0;
tx_offload_mask.sec_pad_len |= ~0;
}
-#endif
txq->ctx_cache[ctx_idx].flags = ol_flags;
txq->ctx_cache[ctx_idx].tx_offload.data[0] =
@@ -631,9 +629,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t ctx = 0;
uint32_t new_ctx;
union ixgbe_tx_offload tx_offload;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec;
-#endif
tx_offload.data[0] = 0;
tx_offload.data[1] = 0;
@@ -661,9 +657,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
-#ifdef RTE_LIB_SECURITY
use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
-#endif
/* If hardware offload required */
tx_ol_req = ol_flags & IXGBE_TX_OFFLOAD_MASK;
@@ -675,7 +669,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.tso_segsz = tx_pkt->tso_segsz;
tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
-#ifdef RTE_LIB_SECURITY
if (use_ipsec) {
union ixgbe_crypto_tx_desc_md *ipsec_mdata =
(union ixgbe_crypto_tx_desc_md *)
@@ -683,7 +676,6 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_offload.sa_idx = ipsec_mdata->sa_idx;
tx_offload.sec_pad_len = ipsec_mdata->pad_len;
}
-#endif
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -871,10 +863,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
-#ifdef RTE_LIB_SECURITY
if (use_ipsec)
olinfo_status |= IXGBE_ADVTXD_POPTS_IPSEC;
-#endif
m_seg = tx_pkt;
do {
@@ -1505,13 +1495,11 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
}
-#ifdef RTE_LIB_SECURITY
if (rx_status & IXGBE_RXD_STAT_SECP) {
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
}
-#endif
return pkt_flags;
}
@@ -2472,9 +2460,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
{
struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
if (txq->offloads == 0 &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST) {
if (txq->tx_rs_thresh <= IXGBE_TX_MAX_FREE_BUF_SZ &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 &&
@@ -2629,9 +2615,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
{
/* Use a simple Tx queue (no offloads, no multi segs) if possible */
if ((txq->offloads == 0) &&
-#ifdef RTE_LIB_SECURITY
!(txq->using_ipsec) &&
-#endif
(txq->tx_rs_thresh >= IXGBE_TX_MAX_BURST)) {
PMD_INIT_LOG(DEBUG, "Using simple tx code path");
dev->tx_pkt_prepare = rte_eth_tx_pkt_prepare_dummy;
@@ -2692,10 +2676,8 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
-#endif
return tx_offload_capa;
}
@@ -2873,10 +2855,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->offloads = offloads;
txq->ops = &def_txq_ops;
txq->tx_deferred_start = tx_conf->tx_deferred_start;
-#ifdef RTE_LIB_SECURITY
txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
RTE_ETH_TX_OFFLOAD_SECURITY);
-#endif
/*
* Modification to set VFTDT for virtual function if vf is detected
@@ -3170,10 +3150,8 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
hw->mac.type == ixgbe_mac_E610)
offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
-#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
-#endif
return offloads;
}
@@ -5101,10 +5079,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
struct ci_rx_queue *rxq = dev->data->rx_queues[i];
rxq->vector_rx = rx_using_sse;
-#ifdef RTE_LIB_SECURITY
rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY);
-#endif
}
}
@@ -5610,7 +5586,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
ixgbe_setup_loopback_link_x540_x550(hw, true);
}
-#ifdef RTE_LIB_SECURITY
if ((dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_SECURITY) ||
(dev->data->dev_conf.txmode.offloads &
@@ -5623,7 +5598,6 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
return ret;
}
}
-#endif
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.h b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
index 7950e56ee4..33023a3580 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.h
@@ -99,11 +99,9 @@ union ixgbe_tx_offload {
/* fields for TX offloading of tunnels */
uint64_t outer_l3_len:8; /**< Outer L3 (IP) Hdr Length. */
uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
-#ifdef RTE_LIB_SECURITY
/* inline ipsec related*/
uint64_t sa_idx:8; /**< TX SA database entry index */
uint64_t sec_pad_len:4; /**< padding length */
-#endif
};
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
index dca3a20ca0..3f37038e5c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -21,7 +21,6 @@ ixgbe_rxq_rearm(struct ci_rx_queue *rxq)
ci_rxq_rearm(rxq, CI_RX_VEC_LEVEL_SSE);
}
-#ifdef RTE_LIB_SECURITY
static inline void
desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
@@ -56,7 +55,6 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
*rearm2 = _mm_extract_epi32(rearm, 2);
*rearm3 = _mm_extract_epi32(rearm, 3);
}
-#endif
static inline void
desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
@@ -265,9 +263,7 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
volatile union ixgbe_adv_rx_desc *rxdp;
struct ci_rx_entry *sw_ring;
uint16_t nb_pkts_recd;
-#ifdef RTE_LIB_SECURITY
uint8_t use_ipsec = rxq->using_ipsec;
-#endif
int pos;
uint64_t var;
__m128i shuf_msk;
@@ -444,10 +440,8 @@ _recv_raw_pkts_vec(struct ci_rx_queue *rxq, struct rte_mbuf **rx_pkts,
desc_to_olflags_v(descs, mbuf_init, vlan_flags, udp_p_flag,
&rx_pkts[pos]);
-#ifdef RTE_LIB_SECURITY
if (unlikely(use_ipsec))
desc_to_olflags_v_ipsec(descs, &rx_pkts[pos]);
-#endif
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
--
2.47.3
* [PATCH v9 03/26] net/ixgbe: split security and ntuple filters
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
` (23 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
These filters are mashed together even though they share almost no code with
each other. Separate the security filter from the ntuple filter and parse
each of them separately.
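As a quick illustration (a mock-up, not driver code; all names here are made up), the resulting dispatch order can be sketched as: the security parser is tried first, and only when it rejects the flow does parsing fall through to the ntuple parser.

```c
/* Hypothetical stand-ins for ixgbe_parse_security_filter() and
 * ixgbe_parse_ntuple_filter(): return 0 on match, negative otherwise. */
static int parse_security(int flow_kind) { return flow_kind == 1 ? 0 : -1; }
static int parse_ntuple(int flow_kind)   { return flow_kind == 2 ? 0 : -1; }

/* Try parsers in order; report which one accepted the flow. */
static int classify_flow(int flow_kind)
{
	if (parse_security(flow_kind) == 0)
		return 1; /* handled as a security (ESP) flow */
	if (parse_ntuple(flow_kind) == 0)
		return 2; /* handled as an ntuple flow */
	return -1; /* neither parser accepted it */
}
```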
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 202 ++++++++++++++++-----------
1 file changed, 122 insertions(+), 80 deletions(-)
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 491e8bccc5..01cd4f9bde 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -214,74 +214,6 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- act = next_no_void_action(actions, NULL);
- if (act->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- const void *conf = act->conf;
- const struct rte_flow_action_security *sec_act;
- struct rte_security_session *session;
- struct ip_spec spec;
-
- if (conf == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- act, "NULL security conf.");
- return -rte_errno;
- }
- /* check if the next not void item is END */
- act = next_no_void_action(actions, act);
- if (act->type != RTE_FLOW_ACTION_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "Not supported action.");
- return -rte_errno;
- }
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last ||
- item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
-
- filter->proto = IPPROTO_ESP;
- sec_act = (const struct rte_flow_action_security *)conf;
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action,
- * which is const. however, we do need to act on the session, so
- * either we do some kind of pointer based lookup to get session
- * pointer internally (which quickly gets unwieldy for lots of
- * flows case), or we simply cast away constness.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, sec_act->security_session);
- return ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- }
-
/* the first not void item can be MAC or IPv4 */
item = next_no_void_pattern(pattern, NULL);
@@ -640,6 +572,112 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
return 0;
}
+static int
+ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct rte_flow_item *item;
+ const struct rte_flow_action *act;
+ struct ip_spec spec;
+ int ret;
+
+ if (hw->mac.type != ixgbe_mac_82599EB &&
+ hw->mac.type != ixgbe_mac_X540 &&
+ hw->mac.type != ixgbe_mac_X550 &&
+ hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_X550EM_a &&
+ hw->mac.type != ixgbe_mac_E610)
+ return -ENOTSUP;
+
+ if (pattern == NULL) {
+ rte_flow_error_set(error,
+ EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL, "NULL pattern.");
+ return -rte_errno;
+ }
+ if (actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "NULL action.");
+ return -rte_errno;
+ }
+ if (attr == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute.");
+ return -rte_errno;
+ }
+
+ /* check if next non-void action is security */
+ act = next_no_void_action(actions, NULL);
+ if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+ security = act->conf;
+ if (security == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "NULL security action config.");
+ }
+ /* check if the next not void item is END */
+ act = next_no_void_action(actions, act);
+ if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ act, "Not supported action.");
+ }
+
+ /* get the IP pattern*/
+ item = next_no_void_pattern(pattern, NULL);
+ while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+ if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "IP pattern missing.");
+ return -rte_errno;
+ }
+ item = next_no_void_pattern(pattern, item);
+ }
+ if (item->spec == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
+ "NULL IP pattern.");
+ return -rte_errno;
+ }
+ spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
+ if (spec.is_ipv6) {
+ const struct rte_flow_item_ipv6 *ipv6 = item->spec;
+ spec.spec.ipv6 = *ipv6;
+ } else {
+ const struct rte_flow_item_ipv4 *ipv4 = item->spec;
+ spec.spec.ipv4 = *ipv4;
+ }
+
+ /*
+ * we get pointer to security session from security action, which is
+ * const. however, we do need to act on the session, so either we do
+ * some kind of pointer based lookup to get session pointer internally
+ * (which quickly gets unwieldy for lots of flows case), or we simply
+ * cast away constness. the latter path was chosen.
+ */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+ ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
+ if (ret) {
+ rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_ACTION, act,
+ "Failed to add security session.");
+ return -rte_errno;
+ }
+ return 0;
+}
+
/* a specific function for ixgbe because the flags is specific */
static int
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
@@ -661,10 +699,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- /* ESP flow not really a flow*/
- if (filter->proto == IPPROTO_ESP)
- return 0;
-
/* Ixgbe doesn't support tcp flags. */
if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
@@ -3099,18 +3133,19 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- /* ESP flow not really a flow*/
- if (ntuple_filter.proto == IPPROTO_ESP) {
- if (ret != 0)
- goto out;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret) {
flow->is_security = true;
return flow;
}
+ memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
+ ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
+ actions, &ntuple_filter, error);
+
if (!ret) {
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
if (!ret) {
@@ -3334,6 +3369,13 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /**
+ * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
+ */
+ ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
+ if (!ret)
+ return 0;
+
memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
actions, &ntuple_filter, error);
--
2.47.3
* [PATCH v9 04/26] net/i40e: get rid of global filter variables
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (2 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 05/26] net/i40e: make default RSS key global Anatoly Burakov
` (22 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, the i40e driver relies on global state to work around the fact that
`rte_flow_validate()` is called directly from `rte_flow_create()`, with no way
to pass state between the two functions. Fix that by making a small wrapper
around validation that creates a dummy context.
Additionally, the tunnel filter does not appear to be used by anything, so it
is omitted from the new structure.
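The shape of the change can be sketched as follows (a minimal mock-up, not the driver's actual code; all names are illustrative): a shared check helper fills a caller-provided context, the public validate callback feeds it a throwaway context, and create reuses the same helper and keeps the parsed result, so no global state is needed.

```c
enum filter_type { FILTER_NONE, FILTER_FDIR };

struct filter_ctx {
	enum filter_type type;
	int payload; /* stands in for the parsed filter union */
};

/* Shared parse/check helper: fills the caller-provided context. */
static int flow_check(int input, struct filter_ctx *ctx)
{
	ctx->type = FILTER_FDIR;
	ctx->payload = input * 2;
	return 0;
}

/* validate() only cares about success/failure: use a dummy context. */
static int flow_validate(int input)
{
	struct filter_ctx dummy = {0};

	return flow_check(input, &dummy);
}

/* create() keeps the context and acts on the parsed result directly. */
static int flow_create(int input)
{
	struct filter_ctx ctx = {0};

	if (flow_check(input, &ctx) < 0)
		return -1;
	return ctx.type == FILTER_FDIR ? ctx.payload : -1;
}
```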
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 16 ++--
drivers/net/intel/i40e/i40e_flow.c | 117 ++++++++++++++-------------
2 files changed, 68 insertions(+), 65 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index cab6d7e9dc..0de036f2d9 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1304,12 +1304,14 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-union i40e_filter_t {
- struct rte_eth_ethertype_filter ethertype_filter;
- struct i40e_fdir_filter_conf fdir_filter;
- struct rte_eth_tunnel_filter_conf tunnel_filter;
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
+struct i40e_filter_ctx {
+ union {
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct i40e_fdir_filter_conf fdir_filter;
+ struct i40e_tunnel_filter_conf consistent_tunnel_filter;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ };
+ enum rte_filter_type type;
};
typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
@@ -1317,7 +1319,7 @@ typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
struct i40e_valid_pattern {
enum rte_flow_item_type *items;
parse_filter_t parse_filter;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2374b9bbca..e611de0c06 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -80,37 +80,37 @@ static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
@@ -124,7 +124,7 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
static int
i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
@@ -136,7 +136,7 @@ static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter);
+ struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -145,8 +145,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static union i40e_filter_t cons_filter;
-static enum rte_filter_type cons_filter_type = RTE_ETH_FILTER_NONE;
/* internal pattern w/o VOID items */
struct rte_flow_item g_items[32];
@@ -1454,10 +1452,9 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct rte_eth_ethertype_filter *ethertype_filter =
- &filter->ethertype_filter;
+ struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
int ret;
ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
@@ -1474,7 +1471,7 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ filter->type = RTE_ETH_FILTER_ETHERTYPE;
return ret;
}
@@ -2549,7 +2546,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
int ret;
@@ -2566,7 +2563,7 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_FDIR;
+ filter->type = RTE_ETH_FILTER_FDIR;
return 0;
}
@@ -2834,10 +2831,9 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
@@ -2852,7 +2848,7 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3086,10 +3082,9 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
@@ -3105,7 +3100,7 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3338,10 +3333,9 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
@@ -3357,7 +3351,7 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3495,10 +3489,9 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_mpls_pattern(dev, pattern,
@@ -3514,7 +3507,7 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3648,10 +3641,9 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_gtp_pattern(dev, pattern,
@@ -3667,7 +3659,7 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
@@ -3766,10 +3758,9 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
struct rte_flow_error *error,
- union i40e_filter_t *filter)
+ struct i40e_filter_ctx *filter)
{
- struct i40e_tunnel_filter_conf *tunnel_filter =
- &filter->consistent_tunnel_filter;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
int ret;
ret = i40e_flow_parse_qinq_pattern(dev, pattern,
@@ -3785,16 +3776,17 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_TUNNEL;
+ filter->type = RTE_ETH_FILTER_TUNNEL;
return ret;
}
static int
-i40e_flow_validate(struct rte_eth_dev *dev,
+i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
+ struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
struct rte_flow_item *items; /* internal pattern w/o VOID items */
@@ -3823,7 +3815,6 @@ i40e_flow_validate(struct rte_eth_dev *dev,
NULL, "NULL attribute.");
return -rte_errno;
}
- memset(&cons_filter, 0, sizeof(cons_filter));
/* Get the non-void item of action */
while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID)
@@ -3834,9 +3825,8 @@ i40e_flow_validate(struct rte_eth_dev *dev,
if (ret)
return ret;
- cons_filter_type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions + i,
- &cons_filter.rss_conf, error);
+ filter_ctx->type = RTE_ETH_FILTER_HASH;
+ return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error);
}
i = 0;
@@ -3878,8 +3868,7 @@ i40e_flow_validate(struct rte_eth_dev *dev,
}
if (parse_filter)
- ret = parse_filter(dev, attr, items, actions,
- error, &cons_filter);
+ ret = parse_filter(dev, attr, items, actions, error, filter_ctx);
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
@@ -3890,6 +3879,19 @@ i40e_flow_validate(struct rte_eth_dev *dev,
return ret;
}
+static int
+i40e_flow_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ /* creates dummy context */
+ struct i40e_filter_ctx filter_ctx = {0};
+
+ return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
+}
+
static struct rte_flow *
i40e_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -3898,15 +3900,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
- ret = i40e_flow_validate(dev, attr, pattern, actions, error);
+ ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
- if (cons_filter_type == RTE_ETH_FILTER_FDIR) {
+ if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
/* if this is the first time we're creating an fdir flow */
if (pf->fdir.fdir_vsi == NULL) {
ret = i40e_fdir_setup(pf);
@@ -3947,18 +3950,16 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
}
- switch (cons_filter_type) {
+ switch (filter_ctx.type) {
case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf,
- &cons_filter.ethertype_filter, 1);
+ ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
i40e_ethertype_filter_list);
break;
case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &cons_filter.fdir_filter, 1);
+ ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
@@ -3966,14 +3967,14 @@ i40e_flow_create(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &cons_filter.consistent_tunnel_filter, 1);
+ &filter_ctx.consistent_tunnel_filter, 1);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
i40e_tunnel_filter_list);
break;
case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &cons_filter.rss_conf);
+ ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
goto free_flow;
flow->rule = TAILQ_LAST(&pf->rss_config_list,
@@ -3983,7 +3984,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
goto free_flow;
}
- flow->filter_type = cons_filter_type;
+ flow->filter_type = filter_ctx.type;
TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
return flow;
@@ -3992,7 +3993,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (cons_filter_type != RTE_ETH_FILTER_FDIR)
+ if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
rte_free(flow);
else
i40e_fdir_entry_pool_put(fdir_info, flow);
--
2.47.3
* [PATCH v9 05/26] net/i40e: make default RSS key global
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (3 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
` (21 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, there are multiple places where we need a default RSS key, but each
of them defines it as a local variable. Make it a global constant, and adjust
all callers to use it. When dealing with the adminq, we cannot send down the
constant directly because adminq commands do not guarantee const-ness, so copy
the RSS key into a local buffer before sending it down to hardware.
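For reference, the new byte array is the same key as the old 32-bit-word arrays, just spelled out byte by byte in little-endian word order. A hypothetical sanity check (not part of the patch) on a little-endian host:

```c
#include <stdint.h>
#include <string.h>

/* Old representation: the key as 32-bit words (from the removed code). */
static const uint32_t key_words[] = {
	0x6b793944, 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
	0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605, 0x55d85839,
	0x3a58997d, 0x2ec938e1, 0x66031581};

/* New representation: the first two words of i40e_rss_key_default. */
static const uint8_t key_bytes[] = {
	0x44, 0x39, 0x79, 0x6b,
	0xb5, 0x4c, 0x50, 0x23};

/* On a little-endian host, the word array viewed as raw bytes must equal
 * the byte array; compare the first n bytes. */
static int key_prefix_matches(size_t n)
{
	uint8_t buf[sizeof(key_words)];

	memcpy(buf, key_words, sizeof(buf)); /* host (little-endian) order */
	return memcmp(buf, key_bytes, n) == 0;
}
```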
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 22 ++++++++++------------
drivers/net/intel/i40e/i40e_hash.c | 23 +++++++++++++++++------
drivers/net/intel/i40e/i40e_hash.h | 3 +++
3 files changed, 30 insertions(+), 18 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index b891215191..9ab8c35621 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -9080,23 +9080,21 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
int
i40e_pf_reset_rss_key(struct i40e_pf *pf)
{
- const uint8_t key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
- sizeof(uint32_t);
- uint8_t *rss_key;
+ uint8_t key_buf[I40E_RSS_KEY_LEN];
+ const uint8_t *rss_key;
/* Reset key */
rss_key = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key;
- if (!rss_key ||
- pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < key_len) {
- static uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
+ if (!rss_key || pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_key_len < sizeof(key_buf))
+ rss_key = i40e_rss_key_default;
- rss_key = (uint8_t *)rss_key_default;
- }
+ /*
+ * adminq does not guarantee const-ness of RSS key once a command is sent down, so make a
+ * local copy.
+ */
+ memcpy(&key_buf, rss_key, sizeof(key_buf));
- return i40e_set_rss_key(pf->main_vsi, rss_key, key_len);
+ return i40e_set_rss_key(pf->main_vsi, key_buf, sizeof(key_buf));
}
static int
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 3149682197..f20b40e7d0 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -233,6 +233,22 @@ struct i40e_hash_match_pattern {
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+const uint8_t i40e_rss_key_default[] = {
+ 0x44, 0x39, 0x79, 0x6b,
+ 0xb5, 0x4c, 0x50, 0x23,
+ 0xb6, 0x75, 0xea, 0x5b,
+ 0x12, 0x4f, 0x9f, 0x30,
+ 0xb8, 0xa2, 0xc0, 0x3d,
+ 0xdf, 0xdc, 0x4d, 0x02,
+ 0xa0, 0x8c, 0x9b, 0x33,
+ 0x4a, 0xf6, 0x4a, 0x4c,
+ 0x05, 0xc6, 0xfa, 0x34,
+ 0x39, 0x58, 0xd8, 0x55,
+ 0x7d, 0x99, 0x58, 0x3a,
+ 0xe1, 0x38, 0xc9, 0x2e,
+ 0x81, 0x15, 0x03, 0x66
+};
+
/* Current supported patterns and RSS types.
* All items that have the same pattern types are together.
*/
@@ -910,17 +926,12 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
const uint8_t *key = rss_act->key;
if (!key || rss_act->key_len != sizeof(rss_conf->key)) {
- const uint32_t rss_key_default[] = {0x6b793944,
- 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8,
- 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605,
- 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581};
-
if (rss_act->key_len != sizeof(rss_conf->key))
PMD_DRV_LOG(WARNING,
"RSS key length invalid, must be %u bytes, now set key to default",
(uint32_t)sizeof(rss_conf->key));
- memcpy(rss_conf->key, rss_key_default, sizeof(rss_conf->key));
+ memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
} else {
memcpy(rss_conf->key, key, sizeof(rss_conf->key));
}
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index ff8c91c030..2513d84565 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -27,6 +27,9 @@ int i40e_hash_filter_destroy(struct i40e_pf *pf,
const struct i40e_rss_filter *rss_filter);
int i40e_hash_filter_flush(struct i40e_pf *pf);
+#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
+extern const uint8_t i40e_rss_key_default[I40E_RSS_KEY_LEN];
+
#ifdef __cplusplus
}
#endif
--
2.47.3
* [PATCH v9 06/26] net/i40e: use unsigned types for queue comparisons
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (4 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 05/26] net/i40e: make default RSS key global Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 07/26] net/i40e: use proper flex len define Anatoly Burakov
` (20 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when we compare queue numbers against the maximum per-TC queue
count of 64, we do not use unsigned values, which results in a compiler
warning when comparing `I40E_MAX_Q_PER_TC` to an unsigned value. Make the
define an unsigned 16-bit constant, and adjust callers to use the correct
types.
As a consequence, `i40e_align_floor` now returns an unsigned value as well -
this is correct, because nothing about that function implies that signed
usage is a valid use case.
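For context, `i40e_align_floor` rounds its argument down to the nearest power of two (queues per TC must be one of 1, 2, 4, ..., 64). A hypothetical, portable equivalent of what it computes, using the new unsigned signature:

```c
#include <stdint.h>

/* Sketch of i40e_align_floor's contract after the patch: the largest
 * power of two not greater than n, computed in unsigned arithmetic.
 * The in-tree version is implemented differently; this loop is just
 * an illustrative equivalent. */
static uint32_t align_floor_sketch(uint32_t n)
{
	uint32_t p;

	if (n == 0)
		return 0;
	for (p = 1; p <= n / 2; p *= 2)
		; /* double p while it still fits under n */
	return p;
}
```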
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 11 ++++++-----
drivers/net/intel/i40e/i40e_ethdev.h | 8 ++++----
drivers/net/intel/i40e/i40e_hash.c | 4 ++--
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 9ab8c35621..27fa789e21 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8976,11 +8976,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
}
/* Calculate the maximum number of contiguous PF queues that are configured */
-int
+uint16_t
i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
{
struct rte_eth_dev_data *data = pf->dev_data;
- int i, num;
+ int i;
+ uint16_t num;
struct ci_rx_queue *rxq;
num = 0;
@@ -9056,7 +9057,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
struct i40e_hw *hw = &pf->adapter->hw;
uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
uint32_t i;
- int num;
+ uint16_t num;
/* If both VMDQ and RSS enabled, not all of PF queues are
* configured. It's necessary to calculate the actual PF
@@ -9072,7 +9073,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
return 0;
for (i = 0; i < hw->func_caps.rss_table_size; i++)
- lut[i] = (uint8_t)(i % (uint32_t)num);
+ lut[i] = (uint8_t)(i % num);
return i40e_set_rss_lut(pf->main_vsi, lut, (uint16_t)i);
}
@@ -10769,7 +10770,7 @@ i40e_vsi_update_queue_mapping(struct i40e_vsi *vsi,
PMD_INIT_LOG(ERR, " number of queues is less that tcs.");
return I40E_ERR_INVALID_QP_ID;
}
- qpnum_per_tc = RTE_MIN(i40e_align_floor(qpnum_per_tc),
+ qpnum_per_tc = RTE_MIN((uint16_t)i40e_align_floor(qpnum_per_tc),
I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 0de036f2d9..ca6638b32c 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -24,7 +24,7 @@
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
-#define I40E_MAX_Q_PER_TC 64
+#define I40E_MAX_Q_PER_TC UINT16_C(64)
#define I40E_NUM_DESC_DEFAULT 512
#define I40E_NUM_DESC_ALIGN 32
#define I40E_BUF_SIZE_MIN 1024
@@ -1456,7 +1456,7 @@ int i40e_flush_queue_region_all_conf(struct rte_eth_dev *dev,
void i40e_init_queue_region_conf(struct rte_eth_dev *dev);
void i40e_flex_payload_reg_set_default(struct i40e_hw *hw);
void i40e_pf_disable_rss(struct i40e_pf *pf);
-int i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
+uint16_t i40e_pf_calc_configured_queues_num(struct i40e_pf *pf);
int i40e_pf_reset_rss_reta(struct i40e_pf *pf);
int i40e_pf_reset_rss_key(struct i40e_pf *pf);
int i40e_pf_config_rss(struct i40e_pf *pf);
@@ -1517,8 +1517,8 @@ i40e_init_adminq_parameter(struct i40e_hw *hw)
hw->aq.asq_buf_size = I40E_AQ_BUF_SZ;
}
-static inline int
-i40e_align_floor(int n)
+static inline uint32_t
+i40e_align_floor(uint32_t n)
{
if (n == 0)
return 0;
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index f20b40e7d0..5756ebf255 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -949,7 +949,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
struct i40e_pf *pf;
struct i40e_hw *hw;
uint16_t i;
- int max_queue;
+ uint16_t max_queue;
hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (!rss_act->queue_num ||
@@ -971,7 +971,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
for (i = 0; i < rss_act->queue_num; i++) {
- if ((int)rss_act->queue[i] >= max_queue)
+ if (rss_act->queue[i] >= max_queue)
break;
}
--
2.47.3
* [PATCH v9 07/26] net/i40e: use proper flex len define
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (5 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 08/26] net/i40e: remove global pattern variable Anatoly Burakov
` (19 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
For FDIR, we have byte arrays that are supposed to be limited to whatever
the HW supports in terms of flex descriptor matching. However, in the
structure definition, the spec and mask byte arrays use different length
defines, and the only reason this works is that both happen to evaluate to
the same value.
Use the i40e-specific definition for both arrays instead.
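The hazard being fixed here, two defines that only coincidentally agree, can also be made an explicit compile-time invariant. A minimal sketch, using hypothetical stand-in names for the generic and driver-specific defines (the real ones are RTE_ETH_FDIR_MAX_FLEXLEN and I40E_FDIR_MAX_FLEX_LEN):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins; in the real headers both evaluate to 16. */
#define GENERIC_FDIR_MAX_FLEXLEN 16
#define DRIVER_FDIR_MAX_FLEX_LEN 16

/* A static assertion turns the coincidence into a checked invariant,
 * so the build breaks if the defines ever drift apart. */
static_assert(GENERIC_FDIR_MAX_FLEXLEN == DRIVER_FDIR_MAX_FLEX_LEN,
	      "flex spec and mask array lengths must match");

struct fdir_flow_ext {
	uint8_t flexbytes[DRIVER_FDIR_MAX_FLEX_LEN]; /* payload bytes to match */
	uint8_t flex_mask[DRIVER_FDIR_MAX_FLEX_LEN]; /* mask over those bytes */
};
```

With both members sized by the same define, the spec and mask arrays can never silently diverge.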
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index ca6638b32c..d57c53f661 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -631,7 +631,7 @@ struct i40e_fdir_flex_pit {
/* A structure used to contain extend input of flow */
struct i40e_fdir_flow_ext {
uint16_t vlan_tci;
- uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
+ uint8_t flexbytes[I40E_FDIR_MAX_FLEX_LEN];
/* It is filled by the flexible payload to match. */
uint8_t flex_mask[I40E_FDIR_MAX_FLEX_LEN];
uint8_t raw_id;
--
2.47.3
* [PATCH v9 08/26] net/i40e: remove global pattern variable
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (6 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 07/26] net/i40e: use proper flex len define Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
` (18 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
When parsing flow patterns, current code cleans up the pattern list by
removing void flow items and copying the remaining items into an array.
For patterns of up to 32 flow items, that array is a global static buffer;
for anything bigger, a new list is dynamically allocated with rte_zmalloc,
which is overkill for short-lived control-path memory.
Remove the global variable, and replace the split behavior with an
unconditional heap allocation.
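The shape of the cleanup can be sketched as follows; struct and function names are illustrative stand-ins, not the driver's actual symbols:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct rte_flow_item. */
struct flow_item {
	int type;
	const void *spec;
};

/* Instead of reusing a fixed global array for small patterns and
 * falling back to rte_zmalloc() for large ones, always take the scratch
 * copy from plain calloc(): the buffer is short-lived control-path
 * memory, so hugepage-backed allocation buys nothing. */
static struct flow_item *copy_pattern(const struct flow_item *pattern, size_t n)
{
	struct flow_item *items = calloc(n, sizeof(*items));

	if (items == NULL)
		return NULL;
	memcpy(items, pattern, n * sizeof(*items));
	return items; /* caller does free(items) on every exit path */
}
```

The unconditional allocation also removes the `items != g_items` checks before each free, simplifying the error paths.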
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 30 ++++++++++--------------------
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index e611de0c06..2791139e59 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -145,9 +146,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* internal pattern w/o VOID items */
-struct rte_flow_item g_items[32];
-
/* Pattern matched ethertype filter */
static enum rte_flow_item_type pattern_ethertype[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -3837,19 +3835,13 @@ i40e_flow_check(struct rte_eth_dev *dev,
i++;
}
item_num++;
-
- if (item_num <= ARRAY_SIZE(g_items)) {
- items = g_items;
- } else {
- items = rte_zmalloc("i40e_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
- if (!items) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
+ items = calloc(item_num, sizeof(struct rte_flow_item));
+ if (items == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+ NULL,
+ "No memory for PMD internal items.");
+ return -ENOMEM;
}
i40e_pattern_skip_void_item(items, pattern);
@@ -3862,8 +3854,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- if (items != g_items)
- rte_free(items);
+ free(items);
return -rte_errno;
}
@@ -3873,8 +3864,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
flag = true;
} while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
- if (items != g_items)
- rte_free(items);
+ free(items);
return ret;
}
--
2.47.3
* [PATCH v9 09/26] net/i40e: avoid rte malloc in tunnel set
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (7 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 08/26] net/i40e: remove global pattern variable Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (17 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when setting tunnel configuration, we allocate a temporary
filter element with rte_zmalloc and free it again before the function
returns. This memory does not need to live in hugepage memory and the
allocation is small, so replace it with a stack allocation.
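The transformation applied throughout this patch can be sketched in a few lines; the struct and field names below are simplified stand-ins for the admin-queue element, not the driver's real definitions:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the admin-queue cloud filter element. */
struct cloud_filter_elem {
	uint8_t  outer_mac[6];
	uint16_t flags;
};

/* The heap round trip (rte_zmalloc at entry, rte_free on every exit
 * path) collapses into a zero-initialized automatic variable: */
static uint16_t build_filter(const uint8_t mac[6])
{
	struct cloud_filter_elem elem = {0}; /* replaces rte_zmalloc() */

	memcpy(elem.outer_mac, mac, sizeof(elem.outer_mac));
	elem.flags |= 0x10; /* arbitrary flag, for illustration only */
	return elem.flags;  /* nothing to free on any return path */
}
```

Besides avoiding the allocation, this removes the rte_free calls sprinkled across the error paths, which is where the bulk of the diff below comes from.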
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 124 ++++++++++++---------------
1 file changed, 53 insertions(+), 71 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 27fa789e21..f9e86b82b7 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -8509,38 +8509,27 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
struct i40e_pf_vf *vf = NULL;
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_vsi *vsi;
- struct i40e_aqc_cloud_filters_element_bb *cld_filter;
- struct i40e_aqc_cloud_filters_element_bb *pfilter;
+ struct i40e_aqc_cloud_filters_element_bb cld_filter = {0};
struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *tunnel, *node;
+ struct i40e_tunnel_filter *node;
struct i40e_tunnel_filter check_filter; /* Check if filter exists */
uint32_t teid_le;
bool big_buffer = 0;
- cld_filter = rte_zmalloc("tunnel_filter",
- sizeof(struct i40e_aqc_add_rm_cloud_filt_elem_ext),
- 0);
-
- if (cld_filter == NULL) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- return -ENOMEM;
- }
- pfilter = cld_filter;
-
rte_ether_addr_copy(&tunnel_filter->outer_mac,
- (struct rte_ether_addr *)&pfilter->element.outer_mac);
+ (struct rte_ether_addr *)&cld_filter.element.outer_mac);
rte_ether_addr_copy(&tunnel_filter->inner_mac,
- (struct rte_ether_addr *)&pfilter->element.inner_mac);
+ (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- pfilter->element.inner_vlan =
+ cld_filter.element.inner_vlan =
rte_cpu_to_le_16(tunnel_filter->inner_vlan);
if (tunnel_filter->ip_type == I40E_TUNNEL_IPTYPE_IPV4) {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v4.data,
&ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ sizeof(cld_filter.element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8548,9 +8537,9 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
+ rte_memcpy(&cld_filter.element.ipaddr.v6.data,
&convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ sizeof(cld_filter.element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8571,11 +8560,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x40;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOUDP;
@@ -8587,11 +8576,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->mpls_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
teid_le >> 4;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
(teid_le & 0xF) << 12;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
0x0;
big_buffer = 1;
tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_MPLSOGRE;
@@ -8603,11 +8592,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8618,11 +8607,11 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->gtp_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0] =
(teid_le >> 16) & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1] =
teid_le & 0xFFFF;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2] =
0x0;
big_buffer = 1;
break;
@@ -8639,8 +8628,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
* Big Buffer should be set, see changes in
* i40e_aq_add_cloud_filters
*/
- pfilter->general_fields[0] = tunnel_filter->inner_vlan;
- pfilter->general_fields[1] = tunnel_filter->outer_vlan;
+ cld_filter.general_fields[0] = tunnel_filter->inner_vlan;
+ cld_filter.general_fields[1] = tunnel_filter->outer_vlan;
big_buffer = 1;
break;
case I40E_CLOUD_TYPE_UDP:
@@ -8655,20 +8644,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->sport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
} else {
@@ -8680,20 +8669,20 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
pf->dport_replace_flag = 1;
}
teid_le = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0] =
I40E_DIRECTION_INGRESS_KEY;
if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_UDP;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP)
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_TCP;
else
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1] =
I40E_TR_L4_TYPE_SCTP;
- pfilter->general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
+ cld_filter.general_fields[I40E_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2] =
(teid_le >> 16) & 0xFFFF;
big_buffer = 1;
}
@@ -8702,48 +8691,46 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
default:
/* Other tunnel types is not supported. */
PMD_DRV_LOG(ERR, "tunnel type is not supported.");
- rte_free(cld_filter);
return -EINVAL;
}
if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoUDP)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_MPLSoGRE)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPC)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_GTPU)
- pfilter->element.flags =
+ cld_filter.element.flags =
I40E_AQC_ADD_CLOUD_FILTER_0X12;
else if (tunnel_filter->tunnel_type == I40E_TUNNEL_TYPE_QINQ)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
else if (tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_UDP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_TCP ||
tunnel_filter->tunnel_type == I40E_CLOUD_TYPE_SCTP) {
if (tunnel_filter->l4_port_type == I40E_L4_PORT_TYPE_SRC)
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X11;
else
- pfilter->element.flags |=
+ cld_filter.element.flags |=
I40E_AQC_ADD_CLOUD_FILTER_0X10;
} else {
val = i40e_dev_get_filter_type(tunnel_filter->filter_type,
- &pfilter->element.flags);
+ &cld_filter.element.flags);
if (val < 0) {
- rte_free(cld_filter);
return -EINVAL;
}
}
- pfilter->element.flags |= rte_cpu_to_le_16(
- I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE |
- ip_type | (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
- pfilter->element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
- pfilter->element.queue_number =
+ cld_filter.element.flags |=
+ rte_cpu_to_le_16(I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type |
+ (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT));
+ cld_filter.element.tenant_id = rte_cpu_to_le_32(tunnel_filter->tenant_id);
+ cld_filter.element.queue_number =
rte_cpu_to_le_16(tunnel_filter->queue_id);
if (!tunnel_filter->is_to_vf)
@@ -8751,7 +8738,6 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
else {
if (tunnel_filter->vf_id >= pf->vf_num) {
PMD_DRV_LOG(ERR, "Invalid argument.");
- rte_free(cld_filter);
return -EINVAL;
}
vf = &pf->vfs[tunnel_filter->vf_id];
@@ -8760,38 +8746,36 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
/* Check if there is the filter in SW list */
memset(&check_filter, 0, sizeof(check_filter));
- i40e_tunnel_filter_convert(cld_filter, &check_filter);
+ i40e_tunnel_filter_convert(&cld_filter, &check_filter);
check_filter.is_to_vf = tunnel_filter->is_to_vf;
check_filter.vf_id = tunnel_filter->vf_id;
node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &check_filter.input);
if (add && node) {
PMD_DRV_LOG(ERR, "Conflict with existing tunnel rules!");
- rte_free(cld_filter);
return -EINVAL;
}
if (!add && !node) {
PMD_DRV_LOG(ERR, "There's no corresponding tunnel filter!");
- rte_free(cld_filter);
return -EINVAL;
}
if (add) {
+ struct i40e_tunnel_filter *tunnel;
+
if (big_buffer)
ret = i40e_aq_add_cloud_filters_bb(hw,
- vsi->seid, cld_filter, 1);
+ vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_add_cloud_filters(hw,
- vsi->seid, &cld_filter->element, 1);
+ vsi->seid, &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to add a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
tunnel = rte_zmalloc("tunnel_filter", sizeof(*tunnel), 0);
if (tunnel == NULL) {
PMD_DRV_LOG(ERR, "Failed to alloc memory.");
- rte_free(cld_filter);
return -ENOMEM;
}
@@ -8802,19 +8786,17 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
} else {
if (big_buffer)
ret = i40e_aq_rem_cloud_filters_bb(
- hw, vsi->seid, cld_filter, 1);
+ hw, vsi->seid, &cld_filter, 1);
else
ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter->element, 1);
+ &cld_filter.element, 1);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to delete a tunnel filter.");
- rte_free(cld_filter);
return -ENOTSUP;
}
ret = i40e_sw_tunnel_filter_del(pf, &node->input);
}
- rte_free(cld_filter);
return ret;
}
--
2.47.3
* [PATCH v9 10/26] net/i40e: avoid rte malloc in RSS RETA operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (8 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
` (16 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
allocate the LUT buffer with rte_zmalloc and free it again before the
function returns. This memory does not need to live in hugepage memory and
the maximum LUT size is 512 bytes, so replace it with a stack allocation.
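For reference, the update/query loops address the application-facing table as reta_conf[idx].reta[shift]; the index arithmetic is a plain div/mod over the group size. A sketch, assuming the usual RTE_ETH_RETA_GROUP_SIZE value of 64:

```c
#include <stdint.h>

#define RETA_GROUP_SIZE 64 /* stand-in for RTE_ETH_RETA_GROUP_SIZE */

/* Split a flat LUT index into the reta_conf[] group index and the
 * entry offset within that group. */
static void reta_index_split(uint16_t i, uint16_t *idx, uint16_t *shift)
{
	*idx = i / RETA_GROUP_SIZE;
	*shift = i % RETA_GROUP_SIZE;
}
```

For a 512-entry LUT this yields eight groups of 64 entries, which is why a fixed 512-byte stack buffer covers the worst case.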
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index f9e86b82b7..ba66f9e3fd 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -4619,7 +4619,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4630,14 +4630,9 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4648,9 +4643,6 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
pf->adapter->rss_reta_updated = 1;
-out:
- rte_free(lut);
-
return ret;
}
@@ -4662,7 +4654,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512] = {0};
int ret;
if (reta_size != lut_size ||
@@ -4673,15 +4665,9 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("i40e_rss_lut", reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = i40e_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
shift = i % RTE_ETH_RETA_GROUP_SIZE;
@@ -4689,9 +4675,6 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
return ret;
}
--
2.47.3
* [PATCH v9 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (9 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
` (15 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding, removing, or configuring MAC and VLAN filters, we
allocate temporary buffers with rte_zmalloc and free them again before the
function returns. This memory does not need to live in hugepage memory, so
replace it with regular malloc/free or a stack allocation where
appropriate.
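Where this patch switches to the stack, it sizes the array by the driver maximum and adds an explicit bound check in place of the old allocation-failure path. A sketch of that pattern, with hypothetical names standing in for I40E_NUM_MACADDR_MAX and the filter struct:

```c
#define NUM_MACADDR_MAX 64 /* stand-in for I40E_NUM_MACADDR_MAX */

struct macvlan_filter {
	unsigned char  mac[6];
	unsigned short vlan;
};

/* Size the scratch array by the driver-wide maximum rather than sizing
 * a heap allocation to the current count; the bound check replaces the
 * old out-of-memory error path. */
static int apply_filters(unsigned int mac_num)
{
	struct macvlan_filter mv_f[NUM_MACADDR_MAX] = {0};

	if (mac_num > NUM_MACADDR_MAX)
		return -1; /* was: rte_zmalloc failure path */

	/* ... fill mv_f[0..mac_num-1] and issue the admin queue call ... */
	(void)mv_f;
	return 0;
}
```

The trade-off is a fixed-size stack footprint in exchange for removing an allocation that can fail; the check on mac_num keeps the behavior safe when the VSI carries more addresses than expected.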
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 135 +++++++++++---------------
drivers/net/intel/i40e/rte_pmd_i40e.c | 17 ++--
2 files changed, 65 insertions(+), 87 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index ba66f9e3fd..672d337d99 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6,6 +6,7 @@
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
+#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
@@ -4128,7 +4129,6 @@ static int
i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_mac_filter_info *mac_filter;
struct i40e_vsi *vsi = pf->main_vsi;
struct rte_eth_rxmode *rxmode;
struct i40e_mac_filter *f;
@@ -4163,12 +4163,12 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
}
if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
i = 0;
num = vsi->mac_num;
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
+
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
return I40E_ERR_NO_MEMORY;
}
@@ -4206,7 +4206,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (ret)
PMD_DRV_LOG(ERR, "i40e vsi add mac fail.");
}
- rte_free(mac_filter);
}
if (mask & RTE_ETH_QINQ_STRIP_MASK) {
@@ -6198,7 +6197,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
int i, num;
struct i40e_mac_filter *f;
void *temp;
- struct i40e_mac_filter_info *mac_filter;
+ struct i40e_mac_filter_info mac_filter[I40E_NUM_MACADDR_MAX] = {0};
enum i40e_mac_filter_type desired_filter;
int ret = I40E_SUCCESS;
@@ -6211,12 +6210,9 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
}
num = vsi->mac_num;
-
- mac_filter = rte_zmalloc("mac_filter_info_data",
- num * sizeof(*mac_filter), 0);
- if (mac_filter == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Too many MAC addresses");
+ return -1;
}
i = 0;
@@ -6228,7 +6224,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
i++;
}
@@ -6240,13 +6236,11 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
if (ret) {
PMD_DRV_LOG(ERR, "Update VSI failed to %s vlan filter",
on ? "enable" : "disable");
- goto DONE;
+ return ret;
}
}
-DONE:
- rte_free(mac_filter);
- return ret;
+ return 0;
}
/* Configure vlan stripping on or off */
@@ -7128,19 +7122,20 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_add_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_add_macvlan_element_data *req_list =
+ (struct i40e_aqc_add_macvlan_element_data *)aq_buff;
+
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
+ PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
+ return I40E_ERR_NO_MEMORY;
+ }
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
ele_buff_size = hw->aq.asq_buf_size;
- req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
- return I40E_ERR_NO_MEMORY;
- }
-
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7169,8 +7164,7 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC match type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].queue_number = 0;
@@ -7182,14 +7176,11 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
actual_num, NULL);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add macvlan filter");
- goto DONE;
+ return ret;
}
num += actual_num;
} while (num < total);
-
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7202,21 +7193,22 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
uint16_t flags;
int ret = I40E_SUCCESS;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- struct i40e_aqc_remove_macvlan_element_data *req_list;
+ uint8_t aq_buff[I40E_AQ_BUF_SZ] = {0};
+ struct i40e_aqc_remove_macvlan_element_data *req_list =
+ (struct i40e_aqc_remove_macvlan_element_data *)aq_buff;
enum i40e_admin_queue_err aq_status;
if (filter == NULL || total == 0)
return I40E_ERR_PARAM;
- ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
- ele_buff_size = hw->aq.asq_buf_size;
-
- req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0);
- if (req_list == NULL) {
- PMD_DRV_LOG(ERR, "Fail to allocate memory");
+ if (hw->aq.asq_buf_size > I40E_AQ_BUF_SZ) {
PMD_DRV_LOG(ERR, "AdminQ size bigger than max");
return I40E_ERR_NO_MEMORY;
}
+ ele_num = hw->aq.asq_buf_size / sizeof(*req_list);
+ ele_buff_size = hw->aq.asq_buf_size;
+
num = 0;
do {
actual_num = (num + ele_num > total) ? (total - num) : ele_num;
@@ -7245,8 +7237,7 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
break;
default:
PMD_DRV_LOG(ERR, "Invalid MAC filter type");
- ret = I40E_ERR_PARAM;
- goto DONE;
+ return I40E_ERR_PARAM;
}
req_list[i].flags = rte_cpu_to_le_16(flags);
}
@@ -7260,15 +7251,13 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
ret = I40E_SUCCESS;
} else {
PMD_DRV_LOG(ERR, "Failed to remove macvlan filter");
- goto DONE;
+ return ret;
}
}
num += actual_num;
} while (num < total);
-DONE:
- rte_free(req_list);
- return ret;
+ return I40E_SUCCESS;
}
/* Find out specific MAC filter */
@@ -7436,7 +7425,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
else
num = vsi->mac_num * vsi->vlan_num;
- mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0);
+ mv_f = calloc(num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7465,7 +7454,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
ret = i40e_remove_macvlan_filters(vsi, mv_f, num);
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7473,7 +7462,7 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
int
i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7490,37 +7479,31 @@ i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
i40e_set_vlan_filter(vsi, vlan, 1);
vsi->vlan_num++;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
{
- struct i40e_macvlan_filter *mv_f;
+ struct i40e_macvlan_filter mv_f[I40E_NUM_MACADDR_MAX] = {0};
int mac_num;
int ret = I40E_SUCCESS;
@@ -7541,42 +7524,36 @@ i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan)
PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr");
return I40E_ERR_PARAM;
}
-
- mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0);
-
- if (mv_f == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return I40E_ERR_NO_MEMORY;
+ if (mac_num > I40E_NUM_MACADDR_MAX) {
+ PMD_DRV_LOG(ERR, "Error! Too many MAC addresses");
+ return I40E_ERR_PARAM;
}
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
/* This is last vlan to remove, replace all mac filter with vlan 0 */
if (vsi->vlan_num == 1) {
ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num);
if (ret != I40E_SUCCESS)
- goto DONE;
+ return ret;
}
i40e_set_vlan_filter(vsi, vlan, 0);
vsi->vlan_num--;
- ret = I40E_SUCCESS;
-DONE:
- rte_free(mv_f);
- return ret;
+ return I40E_SUCCESS;
}
int
@@ -7607,7 +7584,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
mac_filter->filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7646,7 +7623,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
@@ -7677,7 +7654,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (mv_f == NULL) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -7706,7 +7683,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
ret = I40E_SUCCESS;
DONE:
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..4839a1d9bf 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -2,6 +2,7 @@
* Copyright(c) 2010-2017 Intel Corporation
*/
+#include <stdlib.h>
#include <eal_export.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
@@ -233,7 +234,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -250,18 +251,18 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
@@ -294,7 +295,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
f->mac_info.filter_type == I40E_MAC_HASH_MATCH)
vlan_num = 1;
- mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0);
+ mv_f = calloc(vlan_num, sizeof(*mv_f));
if (!mv_f) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
return I40E_ERR_NO_MEMORY;
@@ -312,18 +313,18 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num,
&f->mac_info.mac_addr);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
}
ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num);
if (ret != I40E_SUCCESS) {
- rte_free(mv_f);
+ free(mv_f);
return ret;
}
- rte_free(mv_f);
+ free(mv_f);
ret = I40E_SUCCESS;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v9 12/26] net/i40e: avoid rte malloc in VF resource queries
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (10 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
` (14 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when responding to VF resource queries, we are dynamically
allocating a temporary buffer with rte_zmalloc followed by an immediate
rte_free. This memory does not need to be stored in hugepage memory and
the allocation size is pretty small, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_pf.c | 26 +++++++-------------------
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_pf.c b/drivers/net/intel/i40e/i40e_pf.c
index ebe1deeade..08cdd6bc4d 100644
--- a/drivers/net/intel/i40e/i40e_pf.c
+++ b/drivers/net/intel/i40e/i40e_pf.c
@@ -309,9 +309,14 @@ static int
i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
bool b_op)
{
- struct virtchnl_vf_resource *vf_res = NULL;
+ /* only have 1 VSI by default */
+ struct {
+ struct virtchnl_vf_resource vf_res;
+ struct virtchnl_vsi_resource vsi_res;
+ } res = {0};
+ struct virtchnl_vf_resource *vf_res = &res.vf_res;
struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
- uint32_t len = 0;
+ uint32_t len = sizeof(res);
uint64_t default_hena = I40E_RSS_HENA_ALL;
int ret = I40E_SUCCESS;
@@ -322,20 +327,6 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
return ret;
}
- /* only have 1 VSI by default */
- len = sizeof(struct virtchnl_vf_resource) +
- I40E_DEFAULT_VF_VSI_NUM *
- sizeof(struct virtchnl_vsi_resource);
-
- vf_res = rte_zmalloc("i40e_vf_res", len, 0);
- if (vf_res == NULL) {
- PMD_DRV_LOG(ERR, "failed to allocate mem");
- ret = I40E_ERR_NO_MEMORY;
- vf_res = NULL;
- len = 0;
- goto send_msg;
- }
-
if (VF_IS_V10(&vf->version)) /* doesn't support offload negotiate */
vf->request_caps = VIRTCHNL_VF_OFFLOAD_L2 |
VIRTCHNL_VF_OFFLOAD_VLAN;
@@ -377,11 +368,8 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
rte_ether_addr_copy(&vf->mac_addr,
(struct rte_ether_addr *)vf_res->vsi_res[0].default_mac_addr);
-send_msg:
i40e_pf_host_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES,
ret, (uint8_t *)vf_res, len);
- rte_free(vf_res);
-
return ret;
}
--
2.47.3
* [PATCH v9 13/26] net/i40e: avoid rte malloc in adminq operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (11 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
` (13 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing admin queue messages, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index 672d337d99..cd648285d1 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -6870,14 +6870,11 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_arq_event_info info;
uint16_t pending, opcode;
+ uint8_t msg_buf[I40E_AQ_BUF_SZ] = {0};
int ret;
- info.buf_len = I40E_AQ_BUF_SZ;
- info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0);
- if (!info.msg_buf) {
- PMD_DRV_LOG(ERR, "Failed to allocate mem");
- return;
- }
+ info.buf_len = sizeof(msg_buf);
+ info.msg_buf = msg_buf;
pending = 1;
while (pending) {
@@ -6913,7 +6910,6 @@ i40e_dev_handle_aq_msg(struct rte_eth_dev *dev)
break;
}
}
- rte_free(info.msg_buf);
}
static void
--
2.47.3
* [PATCH v9 14/26] net/i40e: avoid rte malloc in DDP package handling
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (12 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
` (12 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when processing Dynamic Device Personalization (DDP) packages and
checking profile information, we are using rte_zmalloc followed by an
immediate rte_free. This memory does not need to be stored in hugepage
memory, so replace it with stack allocation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/i40e/rte_pmd_i40e.c | 43 +++++----------------------
1 file changed, 8 insertions(+), 35 deletions(-)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index 4839a1d9bf..7892fa8a4e 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -1557,7 +1557,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
{
struct rte_eth_dev *dev = &rte_eth_devices[port];
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint8_t *buff;
+ uint8_t buff[(I40E_MAX_PROFILE_NUM + 4) * I40E_PROFILE_INFO_SIZE] = {0};
struct rte_pmd_i40e_profile_list *p_list;
struct rte_pmd_i40e_profile_info *pinfo, *p;
uint32_t i;
@@ -1570,13 +1570,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
PMD_DRV_LOG(INFO, "Read-only profile.");
return 0;
}
- buff = rte_zmalloc("pinfo_list",
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4),
- 0);
- if (!buff) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return -1;
- }
ret = i40e_aq_get_ddp_list(
hw, (void *)buff,
@@ -1584,7 +1577,6 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
0, NULL);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get profile info list.");
- rte_free(buff);
return -1;
}
p_list = (struct rte_pmd_i40e_profile_list *)buff;
@@ -1592,20 +1584,17 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
p = &p_list->p_info[i];
if (pinfo->track_id == p->track_id) {
PMD_DRV_LOG(INFO, "Profile exists.");
- rte_free(buff);
return 1;
}
}
/* profile with group id 0xff is compatible with any other profile */
if ((pinfo->track_id & group_mask) == group_mask) {
- rte_free(buff);
return 0;
}
for (i = 0; i < p_list->p_count; i++) {
p = &p_list->p_info[i];
if ((p->track_id & group_mask) == 0) {
PMD_DRV_LOG(INFO, "Profile of the group 0 exists.");
- rte_free(buff);
return 2;
}
}
@@ -1616,12 +1605,9 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
if ((pinfo->track_id & group_mask) !=
(p->track_id & group_mask)) {
PMD_DRV_LOG(INFO, "Profile of different group exists.");
- rte_free(buff);
return 3;
}
}
-
- rte_free(buff);
return 0;
}
@@ -1637,7 +1623,10 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
struct i40e_generic_seg_header *profile_seg_hdr;
struct i40e_generic_seg_header *metadata_seg_hdr;
uint32_t track_id;
- uint8_t *profile_info_sec;
+ struct {
+ struct i40e_profile_section_header sec;
+ struct i40e_profile_info info;
+ } profile_info_sec = {0};
int is_exist;
enum i40e_status_code status = I40E_SUCCESS;
static const uint32_t type_mask = 0xff000000;
@@ -1702,26 +1691,15 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
return -EINVAL;
}
- profile_info_sec = rte_zmalloc(
- "i40e_profile_info",
- sizeof(struct i40e_profile_section_header) +
- sizeof(struct i40e_profile_info),
- 0);
- if (!profile_info_sec) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- return -EINVAL;
- }
-
/* Check if the profile already loaded */
i40e_generate_profile_info_sec(
((struct i40e_profile_segment *)profile_seg_hdr)->name,
&((struct i40e_profile_segment *)profile_seg_hdr)->version,
- track_id, profile_info_sec,
+ track_id, (uint8_t *)&profile_info_sec,
op == RTE_PMD_I40E_PKG_OP_WR_ADD);
- is_exist = i40e_check_profile_info(port, profile_info_sec);
+ is_exist = i40e_check_profile_info(port, (uint8_t *)&profile_info_sec);
if (is_exist < 0) {
PMD_DRV_LOG(ERR, "Failed to check profile.");
- rte_free(profile_info_sec);
return -EINVAL;
}
@@ -1734,13 +1712,11 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
else if (is_exist == 3)
PMD_DRV_LOG(ERR, "Profile of different group already exists");
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return -EEXIST;
}
} else if (op == RTE_PMD_I40E_PKG_OP_WR_DEL) {
if (is_exist != 1) {
PMD_DRV_LOG(ERR, "Profile does not exist.");
- rte_free(profile_info_sec);
return -EACCES;
}
}
@@ -1752,7 +1728,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
track_id);
if (status) {
PMD_DRV_LOG(ERR, "Failed to write profile for delete.");
- rte_free(profile_info_sec);
return status;
}
} else {
@@ -1765,14 +1740,13 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
PMD_DRV_LOG(ERR, "Failed to write profile for add.");
else
PMD_DRV_LOG(ERR, "Failed to write profile.");
- rte_free(profile_info_sec);
return status;
}
}
if (track_id && (op != RTE_PMD_I40E_PKG_OP_WR_ONLY)) {
/* Modify loaded profiles info list */
- status = i40e_add_rm_profile_info(hw, profile_info_sec);
+ status = i40e_add_rm_profile_info(hw, (uint8_t *)&profile_info_sec);
if (status) {
if (op == RTE_PMD_I40E_PKG_OP_WR_ADD)
PMD_DRV_LOG(ERR, "Failed to add profile to info list.");
@@ -1785,7 +1759,6 @@ rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
op == RTE_PMD_I40E_PKG_OP_WR_DEL)
i40e_update_customized_info(dev, buff, size, op);
- rte_free(profile_info_sec);
return status;
}
--
2.47.3
* [PATCH v9 15/26] net/i40e: avoid rte malloc in DDP ptype handling
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (13 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
` (11 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating customized protocol and packet type information
via DDP packages, we are using rte_zmalloc followed by immediate
rte_free. This memory does not need to be stored in hugepage memory, so
replace it with stack allocation or regular calloc/free as appropriate.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 43 ++++++++--------------------
1 file changed, 12 insertions(+), 31 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index cd648285d1..af736f59be 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -11716,8 +11716,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint32_t pctype_num;
- struct rte_pmd_i40e_ptype_info *pctype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info pctype[I40E_CUSTOMIZED_MAX] = {0};
struct i40e_customized_pctype *new_pctype = NULL;
uint8_t proto_id;
uint8_t pctype_value;
@@ -11743,19 +11742,16 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = pctype_num * sizeof(struct rte_pmd_i40e_proto_info);
- pctype = rte_zmalloc("new_pctype", buff_size, 0);
- if (!pctype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (pctype_num > RTE_DIM(pctype)) {
+ PMD_DRV_LOG(ERR, "Pctype number exceeds maximum supported");
return -1;
}
/* get information about new pctype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)pctype, buff_size,
+ (uint8_t *)pctype, sizeof(pctype),
RTE_PMD_I40E_PKG_INFO_PCTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get pctype list");
- rte_free(pctype);
return -1;
}
@@ -11836,7 +11832,6 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
- rte_free(pctype);
return 0;
}
@@ -11846,11 +11841,10 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
struct rte_pmd_i40e_proto_info *proto,
enum rte_pmd_i40e_package_op op)
{
- struct rte_pmd_i40e_ptype_mapping *ptype_mapping;
+ struct rte_pmd_i40e_ptype_mapping ptype_mapping[I40E_MAX_PKT_TYPE] = {0};
uint16_t port_id = dev->data->port_id;
uint32_t ptype_num;
- struct rte_pmd_i40e_ptype_info *ptype;
- uint32_t buff_size;
+ struct rte_pmd_i40e_ptype_info ptype[I40E_MAX_PKT_TYPE] = {0};
uint8_t proto_id;
char name[RTE_PMD_I40E_DDP_NAME_SIZE];
uint32_t i, j, n;
@@ -11881,31 +11875,20 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
return -1;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_info);
- ptype = rte_zmalloc("new_ptype", buff_size, 0);
- if (!ptype) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
+ if (ptype_num > RTE_DIM(ptype)) {
+ PMD_DRV_LOG(ERR, "Too many ptypes");
return -1;
}
/* get information about new ptype list */
ret = rte_pmd_i40e_get_ddp_info(pkg, pkg_size,
- (uint8_t *)ptype, buff_size,
+ (uint8_t *)ptype, sizeof(ptype),
RTE_PMD_I40E_PKG_INFO_PTYPE_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get ptype list");
- rte_free(ptype);
return ret;
}
- buff_size = ptype_num * sizeof(struct rte_pmd_i40e_ptype_mapping);
- ptype_mapping = rte_zmalloc("ptype_mapping", buff_size, 0);
- if (!ptype_mapping) {
- PMD_DRV_LOG(ERR, "Failed to allocate memory");
- rte_free(ptype);
- return -1;
- }
-
/* Update ptype mapping table. */
for (i = 0; i < ptype_num; i++) {
ptype_mapping[i].hw_ptype = ptype[i].ptype_id;
@@ -12040,8 +12023,6 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(ERR, "Failed to update ptype mapping table.");
- rte_free(ptype_mapping);
- rte_free(ptype);
return ret;
}
@@ -12076,7 +12057,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
}
buff_size = proto_num * sizeof(struct rte_pmd_i40e_proto_info);
- proto = rte_zmalloc("new_proto", buff_size, 0);
+ proto = calloc(proto_num, sizeof(struct rte_pmd_i40e_proto_info));
if (!proto) {
PMD_DRV_LOG(ERR, "Failed to allocate memory");
return;
@@ -12088,7 +12069,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
RTE_PMD_I40E_PKG_INFO_PROTOCOL_LIST);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to get protocol list");
- rte_free(proto);
+ free(proto);
return;
}
@@ -12126,7 +12107,7 @@ i40e_update_customized_info(struct rte_eth_dev *dev, uint8_t *pkg,
if (ret)
PMD_DRV_LOG(INFO, "No ptype is updated.");
- rte_free(proto);
+ free(proto);
}
/* Create a QinQ cloud filter
--
2.47.3
* [PATCH v9 16/26] net/iavf: remove remnants of pipeline mode
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (14 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
` (10 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
When pipeline mode was removed, some of the infrastructure used by
pipelines was left in the code. Remove it, as it is now unused.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_fdir.c | 1 -
drivers/net/intel/iavf/iavf_fsub.c | 1 -
drivers/net/intel/iavf/iavf_generic_flow.h | 15 ---------------
drivers/net/intel/iavf/iavf_hash.c | 1 -
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 1 -
5 files changed, 19 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c
index 0ef6e0d04a..9eae874800 100644
--- a/drivers/net/intel/iavf/iavf_fdir.c
+++ b/drivers/net/intel/iavf/iavf_fdir.c
@@ -1632,7 +1632,6 @@ static struct iavf_flow_parser iavf_fdir_parser = {
.array = iavf_fdir_pattern,
.array_len = RTE_DIM(iavf_fdir_pattern),
.parse_pattern_action = iavf_fdir_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fdir_engine_register)
diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c
index cf1030320f..bfb34695de 100644
--- a/drivers/net/intel/iavf/iavf_fsub.c
+++ b/drivers/net/intel/iavf/iavf_fsub.c
@@ -814,7 +814,6 @@ iavf_flow_parser iavf_fsub_parser = {
.array = iavf_fsub_pattern_list,
.array_len = RTE_DIM(iavf_fsub_pattern_list),
.parse_pattern_action = iavf_fsub_parse,
- .stage = IAVF_FLOW_STAGE_DISTRIBUTOR,
};
RTE_INIT(iavf_fsub_engine_init)
diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h
index b11bb4cf2b..b97cf8b7ff 100644
--- a/drivers/net/intel/iavf/iavf_generic_flow.h
+++ b/drivers/net/intel/iavf/iavf_generic_flow.h
@@ -485,20 +485,6 @@ enum iavf_flow_engine_type {
IAVF_FLOW_ENGINE_MAX,
};
-/**
- * classification stages.
- * for non-pipeline mode, we have two classification stages: Distributor/RSS
- * for pipeline-mode we have three classification stages:
- * Permission/Distributor/RSS
- */
-enum iavf_flow_classification_stage {
- IAVF_FLOW_STAGE_NONE = 0,
- IAVF_FLOW_STAGE_IPSEC_CRYPTO,
- IAVF_FLOW_STAGE_RSS,
- IAVF_FLOW_STAGE_DISTRIBUTOR,
- IAVF_FLOW_STAGE_MAX,
-};
-
/* Struct to store engine created. */
struct iavf_flow_engine {
TAILQ_ENTRY(iavf_flow_engine) node;
@@ -527,7 +513,6 @@ struct iavf_flow_parser {
struct iavf_pattern_match_item *array;
uint32_t array_len;
parse_pattern_action_t parse_pattern_action;
- enum iavf_flow_classification_stage stage;
};
/* Struct to store parser created. */
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index 1725c2b2b9..a40fed7542 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -691,7 +691,6 @@ static struct iavf_flow_parser iavf_hash_parser = {
.array = iavf_hash_pattern_list,
.array_len = RTE_DIM(iavf_hash_pattern_list),
.parse_pattern_action = iavf_hash_parse_pattern_action,
- .stage = IAVF_FLOW_STAGE_RSS,
};
int
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index ab41b1973e..82323b9aa9 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -1983,7 +1983,6 @@ static struct iavf_flow_parser iavf_ipsec_flow_parser = {
.array = iavf_ipsec_flow_pattern,
.array_len = RTE_DIM(iavf_ipsec_flow_pattern),
.parse_pattern_action = iavf_ipsec_flow_parse,
- .stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,
};
RTE_INIT(iavf_ipsec_flow_engine_register)
--
2.47.3
* [PATCH v9 17/26] net/iavf: decouple hash uninit from parser uninit
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (15 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
` (9 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, parser deinitialization triggers removal of the current RSS
configuration. This should not be done as part of parser deinitialization;
it should instead be a separate step in the dev close flow.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/iavf/iavf.h | 1 +
drivers/net/intel/iavf/iavf_ethdev.c | 3 +++
drivers/net/intel/iavf/iavf_hash.c | 12 ++++++++----
3 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 39949acc11..6054321771 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -566,4 +566,5 @@ void iavf_dev_watchdog_disable(struct iavf_adapter *adapter);
void iavf_handle_hw_reset(struct rte_eth_dev *dev, bool vf_initiated_reset);
void iavf_set_no_poll(struct iavf_adapter *adapter, bool link_change);
bool is_iavf_supported(struct rte_eth_dev *dev);
+void iavf_hash_uninit(struct iavf_adapter *ad);
#endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 954bce723d..b45da4d8b1 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -2977,6 +2977,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
/* free iAVF security device context all related resources */
iavf_security_ctx_destroy(adapter);
+ /* remove RSS configuration */
+ iavf_hash_uninit(adapter);
+
iavf_flow_flush(dev, NULL);
iavf_flow_uninit(adapter);
diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c
index a40fed7542..cb10eeab78 100644
--- a/drivers/net/intel/iavf/iavf_hash.c
+++ b/drivers/net/intel/iavf/iavf_hash.c
@@ -77,7 +77,7 @@ static int
iavf_hash_destroy(struct iavf_adapter *ad, struct rte_flow *flow,
struct rte_flow_error *error);
static void
-iavf_hash_uninit(struct iavf_adapter *ad);
+iavf_hash_uninit_parser(struct iavf_adapter *ad);
static void
iavf_hash_free(struct rte_flow *flow);
static int
@@ -680,7 +680,7 @@ static struct iavf_flow_engine iavf_hash_engine = {
.init = iavf_hash_init,
.create = iavf_hash_create,
.destroy = iavf_hash_destroy,
- .uninit = iavf_hash_uninit,
+ .uninit = iavf_hash_uninit_parser,
.free = iavf_hash_free,
.type = IAVF_FLOW_ENGINE_HASH,
};
@@ -1641,6 +1641,12 @@ iavf_hash_destroy(__rte_unused struct iavf_adapter *ad,
}
static void
+iavf_hash_uninit_parser(struct iavf_adapter *ad)
+{
+ iavf_unregister_parser(&iavf_hash_parser, ad);
+}
+
+void
iavf_hash_uninit(struct iavf_adapter *ad)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
@@ -1658,8 +1664,6 @@ iavf_hash_uninit(struct iavf_adapter *ad)
rss_conf = &ad->dev_data->dev_conf.rx_adv_conf.rss_conf;
if (iavf_rss_hash_set(ad, rss_conf->rss_hf, false))
PMD_DRV_LOG(ERR, "fail to delete default RSS");
-
- iavf_unregister_parser(&iavf_hash_parser, ad);
}
static void
--
2.47.3
* [PATCH v9 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (16 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
` (8 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when calling down into the VF mailbox, the IPsec code uses
rte_malloc to allocate VF message structures. This memory does not need to
be stored in hugepage memory and the allocations are small, so replace
rte_malloc/rte_free with regular calloc/free.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 60 ++++++++++------------
1 file changed, 28 insertions(+), 32 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
index 82323b9aa9..fe540e76cb 100644
--- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c
@@ -467,7 +467,7 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_cfg);
- request = rte_malloc("iavf-sad-add-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -475,7 +475,7 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_cfg_resp);
- response = rte_malloc("iavf-sad-add-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -553,8 +553,8 @@ iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,
else
rc = response->ipsec_data.sa_cfg_resp->sa_handle;
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -728,8 +728,7 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_cfg);
- request = rte_malloc("iavf-inbound-security-policy-add-request",
- request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -770,8 +769,7 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_cfg_resp);
- response = rte_malloc("iavf-inbound-security-policy-add-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -792,8 +790,8 @@ iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
rc = response->ipsec_data.sp_cfg_resp->rule_id;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -808,7 +806,7 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_update);
- request = rte_malloc("iavf-sa-update-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -816,7 +814,7 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-update-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -846,8 +844,8 @@ iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,
rc = response->ipsec_data.ipsec_resp->resp;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -905,7 +903,7 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sp_destroy);
- request = rte_malloc("iavf-sp-del-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -913,7 +911,7 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sp-del-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -944,8 +942,8 @@ iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
return response->ipsec_data.ipsec_status->status;
update_cleanup:
- rte_free(request);
- rte_free(response);
+ free(request);
+ free(response);
return rc;
}
@@ -962,7 +960,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_sa_destroy);
- request = rte_malloc("iavf-sa-del-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -971,7 +969,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_resp);
- response = rte_malloc("iavf-sa-del-response", response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1013,8 +1011,8 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
rc = -EFAULT;
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1168,7 +1166,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-capability-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1176,8 +1174,7 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_cap);
- response = rte_malloc("iavf-device-capability-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1203,8 +1200,8 @@ iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,
memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
@@ -1593,7 +1590,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
request_len = sizeof(struct inline_ipsec_msg);
- request = rte_malloc("iavf-device-status-request", request_len, 0);
+ request = calloc(1, request_len);
if (request == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1601,8 +1598,7 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
response_len = sizeof(struct inline_ipsec_msg) +
sizeof(struct virtchnl_ipsec_status);
- response = rte_malloc("iavf-device-status-response",
- response_len, 0);
+ response = calloc(1, response_len);
if (response == NULL) {
rc = -ENOMEM;
goto update_cleanup;
@@ -1628,8 +1624,8 @@ iavf_ipsec_crypto_status_get(struct iavf_adapter *adapter,
memcpy(status, response->ipsec_data.ipsec_status, sizeof(*status));
update_cleanup:
- rte_free(response);
- rte_free(request);
+ free(response);
+ free(request);
return rc;
}
--
2.47.3
^ permalink raw reply related [flat|nested] 297+ messages in thread
* [PATCH v9 19/26] net/iavf: avoid rte malloc in RSS configuration
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (17 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
` (7 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring RSS (the redirection/lookup table and the
hash key), we are using rte_zmalloc followed by an immediate rte_free.
This memory does not need to be stored in hugepage memory, and in the
context of IAVF the sizes of these structures are not known at compile
time, so replace the allocations with plain calloc()/free() rather than
stack buffers.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_ethdev.c | 4 ++--
drivers/net/intel/iavf/iavf_vchnl.c | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index b45da4d8b1..4e0df2ca05 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1553,7 +1553,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1573,7 +1573,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = iavf_configure_rss_lut(adapter);
if (ret) /* revert back */
rte_memcpy(vf->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 9ad39300c6..55986ef909 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1159,7 +1159,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
- rss_lut = rte_zmalloc("rss_lut", len, 0);
+ rss_lut = calloc(1, len);
if (!rss_lut)
return -ENOMEM;
@@ -1178,7 +1178,7 @@ iavf_configure_rss_lut(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_LUT");
- rte_free(rss_lut);
+ free(rss_lut);
return err;
}
@@ -1191,7 +1191,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
int len, err = 0;
len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
- rss_key = rte_zmalloc("rss_key", len, 0);
+ rss_key = calloc(1, len);
if (!rss_key)
return -ENOMEM;
@@ -1210,7 +1210,7 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_CONFIG_RSS_KEY");
- rte_free(rss_key);
+ free(rss_key);
return err;
}
--
2.47.3
* [PATCH v9 20/26] net/iavf: avoid rte malloc in MAC address operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (18 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
` (6 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
The original code also had a loop that split the MAC address list across
multiple virtchnl messages. However, the maximum number of MAC addresses
is 64, each virtchnl MAC address entry is 8 bytes, and the maximum
virtchnl message size is 4K, so the worst-case list always fits in a
single message and the splitting is unnecessary. This loop has been
removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf_vchnl.c | 88 ++++++++++++-----------------
1 file changed, 35 insertions(+), 53 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index 55986ef909..e27d68fd9f 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1380,63 +1380,45 @@ iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
void
iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
- struct virtchnl_ether_addr_list *list;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[IAVF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct rte_ether_addr *addr;
- struct iavf_cmd_info args;
- int len, err, i, j;
- int next_begin = 0;
- int begin = 0;
+ struct iavf_cmd_info args = {0};
+ int err, i;
+ size_t buf_len;
- do {
- j = 0;
- len = sizeof(struct virtchnl_ether_addr_list);
- for (i = begin; i < IAVF_NUM_MACADDR_MAX; i++, next_begin++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- len += sizeof(struct virtchnl_ether_addr);
- if (len >= IAVF_AQ_BUF_SZ) {
- next_begin = i + 1;
- break;
- }
- }
+ for (i = 0; i < IAVF_NUM_MACADDR_MAX; i++) {
+ struct rte_ether_addr *addr = &adapter->dev_data->mac_addrs[i];
+ struct virtchnl_ether_addr *vc_addr = &list->list[list->num_elements];
- list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return;
- }
+ /* ignore empty addresses */
+ if (rte_is_zero_ether_addr(addr))
+ continue;
+ list->num_elements++;
- for (i = begin; i < next_begin; i++) {
- addr = &adapter->dev_data->mac_addrs[i];
- if (rte_is_zero_ether_addr(addr))
- continue;
- rte_memcpy(list->list[j].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
- list->list[j].type = (j == 0 ?
- VIRTCHNL_ETHER_ADDR_PRIMARY :
- VIRTCHNL_ETHER_ADDR_EXTRA);
- PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
- RTE_ETHER_ADDR_BYTES(addr));
- j++;
- }
- list->vsi_id = vf->vsi_res->vsi_id;
- list->num_elements = j;
- args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
- VIRTCHNL_OP_DEL_ETH_ADDR;
- args.in_args = (uint8_t *)list;
- args.in_args_size = len;
- args.out_buffer = vf->aq_resp;
- args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command %s",
- add ? "OP_ADD_ETHER_ADDRESS" :
- "OP_DEL_ETHER_ADDRESS");
- rte_free(list);
- begin = next_begin;
- } while (begin < IAVF_NUM_MACADDR_MAX);
+ memcpy(vc_addr->addr, addr->addr_bytes, sizeof(addr->addr_bytes));
+ vc_addr->type = (list->num_elements == 1) ?
+ VIRTCHNL_ETHER_ADDR_PRIMARY :
+ VIRTCHNL_ETHER_ADDR_EXTRA;
+ }
+
+ /* for some reason PF side checks for buffer being too big, so adjust it down */
+ buf_len = sizeof(struct virtchnl_ether_addr_list) +
+ sizeof(struct virtchnl_ether_addr) * list->num_elements;
+
+ list->vsi_id = vf->vsi_res->vsi_id;
+ args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
+ args.in_args = (uint8_t *)list;
+ args.in_args_size = buf_len;
+ args.out_buffer = vf->aq_resp;
+ args.out_size = IAVF_AQ_BUF_SZ;
+ err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
+ if (err)
+ PMD_DRV_LOG(ERR, "fail to execute command %s",
+ add ? "OP_ADD_ETHER_ADDRESS" : "OP_DEL_ETHER_ADDRESS");
}
int
--
2.47.3
* [PATCH v9 21/26] net/iavf: avoid rte malloc in queue operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (19 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
` (5 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when enabling, disabling, or switching queues, we are using
rte_malloc followed by an immediate rte_free. This is not needed, as
these structures are not stored anywhere, so replace them with stack
allocation.
The original code did not check the maximum queue number, because the
design was built around an anti-pattern of the caller having to chunk
the queue configuration. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +-
drivers/net/intel/iavf/iavf_vchnl.c | 215 +++++++++++++++------------
3 files changed, 119 insertions(+), 114 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 6054321771..77a2c94290 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -503,8 +503,7 @@ int iavf_disable_queues(struct iavf_adapter *adapter);
int iavf_disable_queues_lv(struct iavf_adapter *adapter);
int iavf_configure_rss_lut(struct iavf_adapter *adapter);
int iavf_configure_rss_key(struct iavf_adapter *adapter);
-int iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index);
+int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs);
int iavf_get_supported_rxdid(struct iavf_adapter *adapter);
int iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, bool enable);
int iavf_config_vlan_insert_v2(struct iavf_adapter *adapter, bool enable);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 4e0df2ca05..6e216f4c0f 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -1036,20 +1036,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_set_vf_quanta_size(adapter, index, num_queue_pairs) != 0)
PMD_DRV_LOG(WARNING, "configure quanta size failed");
- /* If needed, send configure queues msg multiple times to make the
- * adminq buffer length smaller than the 4K limitation.
- */
- while (num_queue_pairs > IAVF_CFG_Q_NUM_PER_BUF) {
- if (iavf_configure_queues(adapter,
- IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
- PMD_DRV_LOG(ERR, "configure queues failed");
- goto error;
- }
- num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
- index += IAVF_CFG_Q_NUM_PER_BUF;
- }
-
- if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
+ if (iavf_configure_queues(adapter, num_queue_pairs) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
goto error;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index e27d68fd9f..eb9e4d0c2d 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1020,19 +1020,15 @@ int
iavf_enable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1048,7 +1044,7 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_ENABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1056,7 +1052,6 @@ iavf_enable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_ENABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1064,19 +1059,15 @@ int
iavf_disable_queues_lv(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ struct virtchnl_queue_chunk chunks[IAVF_RXTX_QUEUE_CHUNKS_NUM - 1];
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues) +
- sizeof(struct virtchnl_queue_chunk) *
- (IAVF_RXTX_QUEUE_CHUNKS_NUM - 1);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = IAVF_RXTX_QUEUE_CHUNKS_NUM;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1092,7 +1083,7 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1100,7 +1091,6 @@ iavf_disable_queues_lv(struct iavf_adapter *adapter)
PMD_DRV_LOG(ERR,
"Failed to execute command of OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1109,17 +1099,14 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
bool rx, bool on)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_del_ena_dis_queues *queue_select;
- struct virtchnl_queue_chunk *queue_chunk;
+ struct {
+ struct virtchnl_del_ena_dis_queues msg;
+ } queue_req = {0};
+ struct virtchnl_del_ena_dis_queues *queue_select = &queue_req.msg;
+ struct virtchnl_queue_chunk *queue_chunk = queue_select->chunks.chunks;
struct iavf_cmd_info args;
- int err, len;
+ int err;
- len = sizeof(struct virtchnl_del_ena_dis_queues);
- queue_select = rte_zmalloc("queue_select", len, 0);
- if (!queue_select)
- return -ENOMEM;
-
- queue_chunk = queue_select->chunks.chunks;
queue_select->chunks.num_chunks = 1;
queue_select->vport_id = vf->vsi_res->vsi_id;
@@ -1138,7 +1125,7 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
else
args.ops = VIRTCHNL_OP_DISABLE_QUEUES_V2;
args.in_args = (u8 *)queue_select;
- args.in_args_size = len;
+ args.in_args_size = sizeof(queue_req);
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -1146,7 +1133,6 @@ iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid,
PMD_DRV_LOG(ERR, "Failed to execute command of %s",
on ? "OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2");
- rte_free(queue_select);
return err;
}
@@ -1214,88 +1200,121 @@ iavf_configure_rss_key(struct iavf_adapter *adapter)
return err;
}
-int
-iavf_configure_queues(struct iavf_adapter *adapter,
- uint16_t num_queue_pairs, uint16_t index)
+static void
+iavf_configure_queue_pair(struct iavf_adapter *adapter,
+ struct virtchnl_queue_pair_info *vc_qp,
+ uint16_t q_idx)
{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
struct ci_rx_queue **rxq = (struct ci_rx_queue **)adapter->dev_data->rx_queues;
struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues;
+
+ /* common parts */
+ vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->txq.queue_id = q_idx;
+
+ vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
+ vc_qp->rxq.queue_id = q_idx;
+ vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
+
+ /* is this txq active? */
+ if (q_idx < adapter->dev_data->nb_tx_queues) {
+ vc_qp->txq.ring_len = txq[q_idx]->nb_tx_desc;
+ vc_qp->txq.dma_ring_addr = txq[q_idx]->tx_ring_dma;
+ }
+
+ /* is this rxq active? */
+ if (q_idx >= adapter->dev_data->nb_rx_queues)
+ return;
+
+ vc_qp->rxq.ring_len = rxq[q_idx]->nb_rx_desc;
+ vc_qp->rxq.dma_ring_addr = rxq[q_idx]->rx_ring_phys_addr;
+ vc_qp->rxq.databuffer_size = rxq[q_idx]->rx_buf_len;
+ vc_qp->rxq.crc_disable = rxq[q_idx]->crc_len != 0 ? 1 : 0;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ if (vf->supported_rxdid & RTE_BIT64(rxq[q_idx]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[q_idx]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, q_idx);
+ } else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[q_idx]->rxdid, IAVF_RXDID_LEGACY_1, q_idx);
+ vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+ }
+
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
+ vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
+ rxq[q_idx]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
+ }
+}
+
+static int
+iavf_configure_queue_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
+{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_vsi_queue_config_info *vc_config;
- struct virtchnl_queue_pair_info *vc_qp;
- struct iavf_cmd_info args;
- uint16_t i, size;
+ struct {
+ struct virtchnl_vsi_queue_config_info config;
+ struct virtchnl_queue_pair_info qp[IAVF_CFG_Q_NUM_PER_BUF];
+ } queue_req = {0};
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_vsi_queue_config_info *vc_config = &queue_req.config;
+ struct virtchnl_queue_pair_info *vc_qp = vc_config->qpair;
+ uint16_t chunk_end = chunk_start + chunk_sz;
+ uint16_t i;
+ size_t buf_len;
int err;
- size = sizeof(*vc_config) +
- sizeof(vc_config->qpair[0]) * num_queue_pairs;
- vc_config = rte_zmalloc("cfg_queue", size, 0);
- if (!vc_config)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
vc_config->vsi_id = vf->vsi_res->vsi_id;
- vc_config->num_queue_pairs = num_queue_pairs;
+ vc_config->num_queue_pairs = chunk_sz;
- for (i = index, vc_qp = vc_config->qpair;
- i < index + num_queue_pairs;
- i++, vc_qp++) {
- vc_qp->txq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->txq.queue_id = i;
+ for (i = chunk_start; i < chunk_end; i++, vc_qp++)
+ iavf_configure_queue_pair(adapter, vc_qp, i);
- /* Virtchnnl configure tx queues by pairs */
- if (i < adapter->dev_data->nb_tx_queues) {
- vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
- vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
- }
+ /* for some reason PF side checks for buffer being too big, so adjust it down */
+ buf_len = sizeof(struct virtchnl_vsi_queue_config_info) +
+ sizeof(struct virtchnl_queue_pair_info) * chunk_sz;
- vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id;
- vc_qp->rxq.queue_id = i;
- vc_qp->rxq.max_pkt_size = vf->max_pkt_len;
-
- if (i >= adapter->dev_data->nb_rx_queues)
- continue;
-
- /* Virtchnnl configure rx queues by pairs */
- vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
- vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
- vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
- vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
- if (vf->supported_rxdid & RTE_BIT64(rxq[i]->rxdid)) {
- vc_qp->rxq.rxdid = rxq[i]->rxdid;
- PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
- vc_qp->rxq.rxdid, i);
- } else {
- PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
- "request default RXDID[%d] in Queue[%d]",
- rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
- vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- }
-
- if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
- vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP &&
- rxq[i]->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
- }
- }
-
- memset(&args, 0, sizeof(args));
args.ops = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
args.in_args = (uint8_t *)vc_config;
- args.in_args_size = size;
+ args.in_args_size = buf_len;
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
- PMD_DRV_LOG(ERR, "Failed to execute command of"
- " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
-
- rte_free(vc_config);
+ PMD_DRV_LOG(ERR, "Failed to execute command VIRTCHNL_OP_CONFIG_VSI_QUEUES");
return err;
}
+int
+iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs)
+{
+ uint16_t c;
+ int err;
+
+ /*
+ * we cannot configure all queues in one go because they won't fit into
+ * adminq buffer, so we're going to chunk them instead
+ */
+ for (c = 0; c < num_queue_pairs; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num_queue_pairs - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_configure_queue_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
+}
+
int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
--
2.47.3
* [PATCH v9 22/26] net/iavf: avoid rte malloc in irq map config
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (20 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
` (4 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Currently, when configuring IRQ maps, we are using rte_zmalloc followed
by an immediate rte_free. This memory does not need to be stored in
hugepage memory, so replace it with stack allocation.
The original code did not check the maximum IRQ map size, because the
design was built around an anti-pattern of the caller having to chunk
the IRQ map calls. This has now been corrected as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/iavf/iavf.h | 3 +-
drivers/net/intel/iavf/iavf_ethdev.c | 15 +---
drivers/net/intel/iavf/iavf_vchnl.c | 121 ++++++++++++++++++---------
3 files changed, 83 insertions(+), 56 deletions(-)
diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
index 77a2c94290..f9bb398a77 100644
--- a/drivers/net/intel/iavf/iavf.h
+++ b/drivers/net/intel/iavf/iavf.h
@@ -511,8 +511,7 @@ int iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t vlanid,
bool add);
int iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter);
int iavf_config_irq_map(struct iavf_adapter *adapter);
-int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index);
+int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num);
void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
int iavf_dev_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete);
diff --git a/drivers/net/intel/iavf/iavf_ethdev.c b/drivers/net/intel/iavf/iavf_ethdev.c
index 6e216f4c0f..26e7febecf 100644
--- a/drivers/net/intel/iavf/iavf_ethdev.c
+++ b/drivers/net/intel/iavf/iavf_ethdev.c
@@ -919,20 +919,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
goto config_irq_map_err;
}
} else {
- uint16_t num_qv_maps = dev->data->nb_rx_queues;
- uint16_t index = 0;
-
- while (num_qv_maps > IAVF_IRQ_MAP_NUM_PER_BUF) {
- if (iavf_config_irq_map_lv(adapter,
- IAVF_IRQ_MAP_NUM_PER_BUF, index)) {
- PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
- goto config_irq_map_err;
- }
- num_qv_maps -= IAVF_IRQ_MAP_NUM_PER_BUF;
- index += IAVF_IRQ_MAP_NUM_PER_BUF;
- }
-
- if (iavf_config_irq_map_lv(adapter, num_qv_maps, index)) {
+ if (iavf_config_irq_map_lv(adapter, dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed");
goto config_irq_map_err;
}
diff --git a/drivers/net/intel/iavf/iavf_vchnl.c b/drivers/net/intel/iavf/iavf_vchnl.c
index eb9e4d0c2d..08dd6f2d7f 100644
--- a/drivers/net/intel/iavf/iavf_vchnl.c
+++ b/drivers/net/intel/iavf/iavf_vchnl.c
@@ -1319,81 +1319,122 @@ int
iavf_config_irq_map(struct iavf_adapter *adapter)
{
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_irq_map_info *map_info;
- struct virtchnl_vector_map *vecmap;
- struct iavf_cmd_info args;
- int len, i, err;
+ struct {
+ struct virtchnl_irq_map_info map_info;
+ struct virtchnl_vector_map vecmap[IAVF_MAX_NUM_QUEUES_DFLT];
+ } map_req = {0};
+ struct virtchnl_irq_map_info *map_info = &map_req.map_info;
+ struct iavf_cmd_info args = {0};
+ int i, err, max_vmi = -1;
+ size_t buf_len;
- len = sizeof(struct virtchnl_irq_map_info) +
- sizeof(struct virtchnl_vector_map) * vf->nb_msix;
+ if (adapter->dev_data->nb_rx_queues > IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "number of queues (%u) exceeds the max supported (%u)",
+ adapter->dev_data->nb_rx_queues, IAVF_MAX_NUM_QUEUES_DFLT);
+ return -EINVAL;
+ }
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
-
- map_info->num_vectors = vf->nb_msix;
for (i = 0; i < adapter->dev_data->nb_rx_queues; i++) {
- vecmap =
- &map_info->vecmap[vf->qv_map[i].vector_id - vf->msix_base];
+ struct virtchnl_vector_map *vecmap;
+ /* always 0 for 1 MSIX, never bigger than rxq for multi MSIX */
+ uint16_t vmi = vf->qv_map[i].vector_id - vf->msix_base;
+
+ /* can't happen but avoid static analysis warnings */
+ if (vmi >= IAVF_MAX_NUM_QUEUES_DFLT) {
+ PMD_DRV_LOG(ERR, "vector id (%u) exceeds the max supported (%u)",
+ vf->qv_map[i].vector_id,
+ vf->msix_base + IAVF_MAX_NUM_QUEUES_DFLT - 1);
+ return -EINVAL;
+ }
+
+ vecmap = &map_info->vecmap[vmi];
vecmap->vsi_id = vf->vsi_res->vsi_id;
vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
vecmap->vector_id = vf->qv_map[i].vector_id;
vecmap->txq_map = 0;
vecmap->rxq_map |= 1 << vf->qv_map[i].queue_id;
+
+ /* MSIX vectors round robin so look for max */
+ if (vmi > max_vmi) {
+ map_info->num_vectors++;
+ max_vmi = vmi;
+ }
}
+ /* for some reason PF side checks for buffer being too big, so adjust it down */
+ buf_len = sizeof(struct virtchnl_irq_map_info) +
+ sizeof(struct virtchnl_vector_map) * map_info->num_vectors;
+
args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = buf_len;
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
- rte_free(map_info);
return err;
}
-int
-iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
- uint16_t index)
+static int
+iavf_config_irq_map_lv_chunk(struct iavf_adapter *adapter,
+ uint16_t chunk_sz,
+ uint16_t chunk_start)
{
+ struct {
+ struct virtchnl_queue_vector_maps map_info;
+ struct virtchnl_queue_vector qv_maps[IAVF_CFG_Q_NUM_PER_BUF];
+ } chunk_req = {0};
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
- struct virtchnl_queue_vector_maps *map_info;
- struct virtchnl_queue_vector *qv_maps;
- struct iavf_cmd_info args;
- int len, i, err;
- int count = 0;
+ struct iavf_cmd_info args = {0};
+ struct virtchnl_queue_vector_maps *map_info = &chunk_req.map_info;
+ struct virtchnl_queue_vector *qv_maps = chunk_req.qv_maps;
+ size_t buf_len;
+ uint16_t i;
- len = sizeof(struct virtchnl_queue_vector_maps) +
- sizeof(struct virtchnl_queue_vector) * (num - 1);
-
- map_info = rte_zmalloc("map_info", len, 0);
- if (!map_info)
- return -ENOMEM;
+ if (chunk_sz > IAVF_CFG_Q_NUM_PER_BUF)
+ return -EINVAL;
map_info->vport_id = vf->vsi_res->vsi_id;
- map_info->num_qv_maps = num;
- for (i = index; i < index + map_info->num_qv_maps; i++) {
- qv_maps = &map_info->qv_maps[count++];
+ map_info->num_qv_maps = chunk_sz;
+ for (i = 0; i < chunk_sz; i++) {
+ qv_maps = &map_info->qv_maps[i];
qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0;
qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
- qv_maps->queue_id = vf->qv_map[i].queue_id;
- qv_maps->vector_id = vf->qv_map[i].vector_id;
+ qv_maps->queue_id = vf->qv_map[chunk_start + i].queue_id;
+ qv_maps->vector_id = vf->qv_map[chunk_start + i].vector_id;
}
+ /* for some reason PF side checks for buffer being too big, so adjust it down */
+ buf_len = sizeof(struct virtchnl_queue_vector_maps) +
+ sizeof(struct virtchnl_queue_vector) * chunk_sz;
+
args.ops = VIRTCHNL_OP_MAP_QUEUE_VECTOR;
args.in_args = (u8 *)map_info;
- args.in_args_size = len;
+ args.in_args_size = buf_len;
args.out_buffer = vf->aq_resp;
args.out_size = IAVF_AQ_BUF_SZ;
- err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
- if (err)
- PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR");
- rte_free(map_info);
- return err;
+ return iavf_execute_vf_cmd_safe(adapter, &args, 0);
+}
+
+int
+iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num)
+{
+ uint16_t c;
+ int err;
+
+ for (c = 0; c < num; c += IAVF_CFG_Q_NUM_PER_BUF) {
+ uint16_t chunk_sz = RTE_MIN(num - c, IAVF_CFG_Q_NUM_PER_BUF);
+ err = iavf_config_irq_map_lv_chunk(adapter, chunk_sz, c);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to configure irq map chunk [%u, %u)",
+ c, c + chunk_sz);
+ return err;
+ }
+ }
+ return 0;
}
void
--
2.47.3
* [PATCH v9 23/26] net/ice: avoid rte malloc in RSS RETA operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (21 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
` (3 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when updating or querying the RSS redirection table (RETA), we
are using rte_zmalloc followed by an immediate rte_free. This memory does
not need to be stored in hugepage memory, so replace it with stack
allocation.
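The stack replacement is safe here because the RETA size is validated against a known maximum before the buffer is touched. A minimal sketch of that bounds-then-fill pattern — the 2048 bound matches `ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K`, and `fill_lut` is a hypothetical helper, not the driver function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LUT_MAX 2048  /* ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K */

/* Fill a caller-provided (typically stack-allocated) LUT after checking
 * the requested size against the buffer's capacity; returns 0 on
 * success, -1 if the request would overflow the fixed buffer. */
static int fill_lut(uint8_t *lut, size_t lut_max, size_t reta_size)
{
	if (reta_size > lut_max)
		return -1; /* reject before touching the buffer */

	memset(lut, 0, reta_size);
	for (size_t i = 0; i < reta_size; i++)
		lut[i] = (uint8_t)(i % 16); /* stand-in for queue indices */
	return 0;
}
```

Because the caller owns the buffer, there is no allocation to fail and no `out:` cleanup label — which is exactly why the patch can drop the `goto out` paths.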
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 4 ++--
drivers/net/intel/ice/ice_ethdev.c | 29 ++++++--------------------
2 files changed, 8 insertions(+), 25 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index abd7875e7b..388495d69c 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -1336,7 +1336,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc("rss_lut", reta_size, 0);
+ lut = calloc(1, reta_size);
if (!lut) {
PMD_DRV_LOG(ERR, "No memory can be allocated");
return -ENOMEM;
@@ -1356,7 +1356,7 @@ ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev,
ret = ice_dcf_configure_rss_lut(hw);
if (ret) /* revert back */
rte_memcpy(hw->rss_lut, lut, reta_size);
- rte_free(lut);
+ free(lut);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_ethdev.c b/drivers/net/intel/ice/ice_ethdev.c
index 41474b7002..0d6b030536 100644
--- a/drivers/net/intel/ice/ice_ethdev.c
+++ b/drivers/net/intel/ice/ice_ethdev.c
@@ -5564,7 +5564,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128 &&
@@ -5581,14 +5581,9 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
/* It MUST use the current LUT size to get the RSS lookup table,
* otherwise if will fail with -100 error code.
*/
- lut = rte_zmalloc(NULL, RTE_MAX(reta_size, lut_size), 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
ret = ice_get_rss_lut(pf->main_vsi, lut, lut_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5604,10 +5599,7 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
pf->hash_lut_size = reta_size;
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
@@ -5618,7 +5610,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
uint16_t i, lut_size = pf->hash_lut_size;
uint16_t idx, shift;
- uint8_t *lut;
+ uint8_t lut[ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K] = {0};
int ret;
if (reta_size != lut_size) {
@@ -5630,15 +5622,9 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
return -EINVAL;
}
- lut = rte_zmalloc(NULL, reta_size, 0);
- if (!lut) {
- PMD_DRV_LOG(ERR, "No memory can be allocated");
- return -ENOMEM;
- }
-
ret = ice_get_rss_lut(pf->main_vsi, lut, reta_size);
if (ret)
- goto out;
+ return ret;
for (i = 0; i < reta_size; i++) {
idx = i / RTE_ETH_RETA_GROUP_SIZE;
@@ -5647,10 +5633,7 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
reta_conf[idx].reta[shift] = lut[i];
}
-out:
- rte_free(lut);
-
- return ret;
+ return 0;
}
static int
--
2.47.3
* [PATCH v9 24/26] net/ice: avoid rte malloc in MAC address operations
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (22 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
` (2 subsequent siblings)
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when adding or deleting MAC addresses, we are using
rte_zmalloc followed by an immediate rte_free. This memory does not need
to be stored in hugepage memory, so replace it with stack allocation.
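The diff below replaces a heap-sized flexible-array message with a wrapper struct that embeds the worst-case array on the stack. The general idiom, sketched with hypothetical names (`msg_hdr`/`msg_buf` stand in for `virtchnl_ether_addr_list` and the patch's anonymous wrapper; nesting a struct whose last member is a flexible array is a widely supported GNU extension that DPDK already relies on):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ENTRIES 64  /* stand-in for DCF_NUM_MACADDR_MAX */

/* Wire-format header ending in a flexible array member. */
struct msg_hdr {
	uint16_t num;
	uint8_t entries[];
};

/* Stack wrapper that reserves space for the worst case, so building the
 * message needs no heap allocation; &buf.hdr is passed to the send path
 * with sizeof(struct msg_buf) as the message length. */
struct msg_buf {
	struct msg_hdr hdr;
	uint8_t entries[MAX_ENTRIES];
};

static size_t msg_capacity(void)
{
	/* bytes available behind the header for flexible-array entries */
	return sizeof(struct msg_buf) - sizeof(struct msg_hdr);
}
```

The trade-off is a fixed worst-case footprint on the stack instead of an exact-size heap buffer; that is acceptable here because `DCF_NUM_MACADDR_MAX` bounds the list.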
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_dcf_ethdev.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_dcf_ethdev.c b/drivers/net/intel/ice/ice_dcf_ethdev.c
index 388495d69c..0d3599d7d0 100644
--- a/drivers/net/intel/ice/ice_dcf_ethdev.c
+++ b/drivers/net/intel/ice/ice_dcf_ethdev.c
@@ -926,19 +926,14 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
struct rte_ether_addr *mc_addrs,
uint32_t mc_addrs_num, bool add)
{
- struct virtchnl_ether_addr_list *list;
- struct dcf_virtchnl_cmd args;
+ struct {
+ struct virtchnl_ether_addr_list list;
+ struct virtchnl_ether_addr addr[DCF_NUM_MACADDR_MAX];
+ } list_req = {0};
+ struct virtchnl_ether_addr_list *list = &list_req.list;
+ struct dcf_virtchnl_cmd args = {0};
uint32_t i;
- int len, err = 0;
-
- len = sizeof(struct virtchnl_ether_addr_list);
- len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num;
-
- list = rte_zmalloc(NULL, len, 0);
- if (!list) {
- PMD_DRV_LOG(ERR, "fail to allocate memory");
- return -ENOMEM;
- }
+ int err = 0;
for (i = 0; i < mc_addrs_num; i++) {
memcpy(list->list[i].addr, mc_addrs[i].addr_bytes,
@@ -953,13 +948,12 @@ dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw,
args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
VIRTCHNL_OP_DEL_ETH_ADDR;
args.req_msg = (uint8_t *)list;
- args.req_msglen = len;
+ args.req_msglen = sizeof(list_req);
err = ice_dcf_execute_virtchnl_cmd(hw, &args);
if (err)
PMD_DRV_LOG(ERR, "fail to execute command %s",
add ? "OP_ADD_ETHER_ADDRESS" :
"OP_DEL_ETHER_ADDRESS");
- rte_free(list);
return err;
}
--
2.47.3
* [PATCH v9 25/26] net/ice: avoid rte malloc in raw pattern parsing
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (23 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-25 12:04 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Bruce Richardson
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when parsing raw flow patterns, we are using rte_zmalloc
followed by an immediate rte_free. This memory does not need to be stored
in hugepage memory, so replace it with regular malloc/free.
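The substitution follows the usual paired-buffer pattern: allocate both buffers, and if the second allocation fails, free the first before returning. A standalone sketch — `alloc_spec_mask` and `demo` are hypothetical helpers, not driver code:

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate a zeroed spec/mask buffer pair of 'len' bytes each.
 * On partial failure, release what was allocated and return -1 so the
 * caller never sees a half-initialized pair. */
static int alloc_spec_mask(unsigned char **spec, unsigned char **mask,
			   size_t len)
{
	*spec = calloc(1, len);
	if (*spec == NULL)
		return -1;

	*mask = calloc(1, len);
	if (*mask == NULL) {
		free(*spec);
		*spec = NULL;
		return -1;
	}
	return 0;
}

/* Exercise the happy path: allocate, use, free both buffers. */
static int demo(void)
{
	unsigned char *spec, *mask;

	if (alloc_spec_mask(&spec, &mask, 16) != 0)
		return -1;
	spec[0] = 0x45;  /* e.g. IPv4 version/IHL */
	mask[0] = 0xff;
	free(spec);
	free(mask);
	return 0;
}
```

This mirrors the error handling the patch keeps intact while swapping `rte_zmalloc`/`rte_free` for `calloc`/`free`.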
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/intel/ice/ice_fdir_filter.c | 14 +++++++-------
drivers/net/intel/ice/ice_hash.c | 10 +++++-----
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 1279823b12..0b92b9ab38 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -1970,13 +1970,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
pkt_len)
return -rte_errno;
- tmp_spec = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_spec = calloc(1, pkt_len / 2);
if (!tmp_spec)
return -rte_errno;
- tmp_mask = rte_zmalloc(NULL, pkt_len / 2, 0);
+ tmp_mask = calloc(1, pkt_len / 2);
if (!tmp_mask) {
- rte_free(tmp_spec);
+ free(tmp_spec);
return -rte_errno;
}
@@ -2041,13 +2041,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
filter->parser_ena = true;
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
break;
raw_error:
- rte_free(tmp_spec);
- rte_free(tmp_mask);
+ free(tmp_spec);
+ free(tmp_mask);
return ret_val;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index b20103a452..f9db530504 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -696,13 +696,13 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len = spec_len / 2;
- pkt_buf = rte_zmalloc(NULL, pkt_len, 0);
+ pkt_buf = calloc(1, pkt_len);
if (!pkt_buf)
return -ENOMEM;
- msk_buf = rte_zmalloc(NULL, pkt_len, 0);
+ msk_buf = calloc(1, pkt_len);
if (!msk_buf) {
- rte_free(pkt_buf);
+ free(pkt_buf);
return -ENOMEM;
}
@@ -753,8 +753,8 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
free_mem:
- rte_free(pkt_buf);
- rte_free(msk_buf);
+ free(pkt_buf);
+ free(msk_buf);
return ret;
}
--
2.47.3
* [PATCH v9 26/26] net/ice: avoid rte malloc in flow pattern match
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (24 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
@ 2026-02-24 15:14 ` Anatoly Burakov
2026-02-25 12:04 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Bruce Richardson
26 siblings, 0 replies; 297+ messages in thread
From: Anatoly Burakov @ 2026-02-24 15:14 UTC (permalink / raw)
To: dev, Bruce Richardson
Currently, when allocating buffers for pattern match items and flow item
storage, we are using rte_zmalloc followed by an immediate rte_free. This
memory does not need to be stored in hugepage memory, so replace it with
regular malloc/free.
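A side benefit of the `rte_zmalloc(item_num * sizeof(...))` to `calloc(item_num, sizeof(...))` conversion in the diff below: `calloc` takes the count and element size separately, and common implementations (glibc, musl) reject a count-times-size product that would overflow, returning NULL instead of a short buffer. A small demonstration, under the assumption that the platform's `calloc` performs this check (C23 requires it; older standards leave it to the implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Request a count that makes count * element_size wrap around size_t.
 * A checking calloc returns NULL; even without the check, no realistic
 * system can satisfy an allocation this large, so NULL is expected
 * either way. Returns 1 if the allocation (correctly) failed. */
static int overflow_alloc_fails(void)
{
	size_t huge = SIZE_MAX / 2 + 1; /* huge * 4 wraps to 0 */
	void *p = calloc(huge, 4);
	int failed = (p == NULL);

	free(p); /* no-op when p is NULL */
	return failed;
}
```

With a plain `malloc(huge * 4)` the wrapped product could silently request 0 bytes and succeed, handing back a buffer far smaller than intended.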
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ice/ice_acl_filter.c | 3 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 5 +++--
drivers/net/intel/ice/ice_generic_flow.c | 15 +++++++--------
drivers/net/intel/ice/ice_hash.c | 3 ++-
drivers/net/intel/ice/ice_switch_filter.c | 5 +++--
5 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c
index 38e30a4f62..6754a40044 100644
--- a/drivers/net/intel/ice/ice_acl_filter.c
+++ b/drivers/net/intel/ice/ice_acl_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1009,7 +1010,7 @@ ice_acl_parse(struct ice_adapter *ad,
*meta = filter;
error:
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c
index 0b92b9ab38..93ab803b44 100644
--- a/drivers/net/intel/ice/ice_fdir_filter.c
+++ b/drivers/net/intel/ice/ice_fdir_filter.c
@@ -3,6 +3,7 @@
*/
#include <stdio.h>
+#include <stdlib.h>
#include <rte_flow.h>
#include <rte_hash.h>
#include <rte_hash_crc.h>
@@ -2845,11 +2846,11 @@ ice_fdir_parse(struct ice_adapter *ad,
rte_free(filter->pkt_buf);
}
- rte_free(item);
+ free(item);
return ret;
error:
rte_free(filter->pkt_buf);
- rte_free(item);
+ free(item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c
index 644958cccf..62f0c334a1 100644
--- a/drivers/net/intel/ice/ice_generic_flow.c
+++ b/drivers/net/intel/ice/ice_generic_flow.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -2313,19 +2314,17 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
}
item_num++;
- items = rte_zmalloc("ice_pattern",
- item_num * sizeof(struct rte_flow_item), 0);
+ items = calloc(item_num, sizeof(struct rte_flow_item));
if (!items) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
NULL, "No memory for PMD internal items.");
return NULL;
}
- pattern_match_item = rte_zmalloc("ice_pattern_match_item",
- sizeof(struct ice_pattern_match_item), 0);
+ pattern_match_item = calloc(1, sizeof(struct ice_pattern_match_item));
if (!pattern_match_item) {
rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Failed to allocate memory.");
- rte_free(items);
+ free(items);
return NULL;
}
@@ -2344,7 +2343,7 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
pattern_match_item->pattern_list =
array[i].pattern_list;
pattern_match_item->meta = array[i].meta;
- rte_free(items);
+ free(items);
return pattern_match_item;
}
}
@@ -2352,8 +2351,8 @@ ice_search_pattern_match_item(struct ice_adapter *ad,
unsupported:
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "Unsupported pattern");
- rte_free(items);
- rte_free(pattern_match_item);
+ free(items);
+ free(pattern_match_item);
return NULL;
}
diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c
index f9db530504..77829e607b 100644
--- a/drivers/net/intel/ice/ice_hash.c
+++ b/drivers/net/intel/ice/ice_hash.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
@@ -1236,7 +1237,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad,
*meta = rss_meta_ptr;
else
rte_free(rss_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return ret;
}
diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c
index 28bc775a2c..b25e5eaad3 100644
--- a/drivers/net/intel/ice/ice_switch_filter.c
+++ b/drivers/net/intel/ice/ice_switch_filter.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
+#include <stdlib.h>
#include <rte_debug.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
@@ -1877,14 +1878,14 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
rte_free(sw_meta_ptr);
}
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return 0;
error:
rte_free(list);
rte_free(sw_meta_ptr);
- rte_free(pattern_match_item);
+ free(pattern_match_item);
return -rte_errno;
}
--
2.47.3
* Re: [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
` (25 preceding siblings ...)
2026-02-24 15:14 ` [PATCH v9 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
@ 2026-02-25 12:04 ` Bruce Richardson
26 siblings, 0 replies; 297+ messages in thread
From: Bruce Richardson @ 2026-02-25 12:04 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Tue, Feb 24, 2026 at 03:14:05PM +0000, Anatoly Burakov wrote:
> This patchset is an assortment of cleanups for ixgbe, i40e, iavf, and ice PMD.
>
> IXGBE:
>
> - Remove unnecessary macros and #ifdef's
> - Disentangle unrelated flow API code paths
>
> I40E:
>
> - Get rid of global variables and unnecessary allocations
> - Reduce code duplication around default RSS keys
> - Use more appropriate integer types and definitions
>
> IAVF:
>
> - Remove dead code
> - Remove unnecessary allocations
> - Separate RSS uninit from hash flow parser uninit
>
> ICE:
>
> - Remove unnecessary allocations
>
> This is done in preparation for further rework.
>
> Note that this patchset depends on driver bug fix patchset [1] as well as an
> IPsec struct fix [2] (both already integrated into next-net-intel).
>
> [1] https://patches.dpdk.org/project/dpdk/list/?series=37350
> [2] https://patches.dpdk.org/project/dpdk/patch/c87355f75826ec90a506dc8d4548e3f6af2b7e93.1771581658.git.anatoly.burakov@intel.com/
>
Series-Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Applied to next-net-intel with the following small adjustments:
* fixed the build issues due to missing stdlib.h include for calloc/free
calls
* merged the various fixes removing rte_malloc calls so that we just have
one commit per driver for this, cutting the set down to 13 patches.
Thanks for all the effort on this cleanup.
/Bruce
2026-02-20 10:14 ` [PATCH v7 06/27] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 07/27] net/i40e: use proper flex len define Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 08/27] net/i40e: remove global pattern variable Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 09/27] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 10/27] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 11/27] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 12/27] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 13/27] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 14/27] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 15/27] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 16/27] net/iavf: remove remnants of pipeline mode Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 17/27] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 18/27] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 19/27] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 20/27] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 21/27] net/iavf: avoid rte malloc in IPsec operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 22/27] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 23/27] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 24/27] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 25/27] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 26/27] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-20 10:14 ` [PATCH v7 27/27] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 05/26] net/i40e: make default RSS key global Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 07/26] net/i40e: use proper flex len define Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 08/26] net/i40e: remove global pattern variable Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-24 12:23 ` [PATCH v8 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 01/26] net/ixgbe: remove MAC type check macros Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 02/26] net/ixgbe: remove security-related ifdefery Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 03/26] net/ixgbe: split security and ntuple filters Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 04/26] net/i40e: get rid of global filter variables Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 05/26] net/i40e: make default RSS key global Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 06/26] net/i40e: use unsigned types for queue comparisons Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 07/26] net/i40e: use proper flex len define Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 08/26] net/i40e: remove global pattern variable Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 09/26] net/i40e: avoid rte malloc in tunnel set Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 10/26] net/i40e: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 11/26] net/i40e: avoid rte malloc in MAC/VLAN filtering Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 12/26] net/i40e: avoid rte malloc in VF resource queries Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 13/26] net/i40e: avoid rte malloc in adminq operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 14/26] net/i40e: avoid rte malloc in DDP package handling Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 15/26] net/i40e: avoid rte malloc in DDP ptype handling Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 16/26] net/iavf: remove remnants of pipeline mode Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 17/26] net/iavf: decouple hash uninit from parser uninit Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 18/26] net/iavf: avoid rte malloc in VF mailbox for IPsec Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 19/26] net/iavf: avoid rte malloc in RSS configuration Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 20/26] net/iavf: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 21/26] net/iavf: avoid rte malloc in queue operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 22/26] net/iavf: avoid rte malloc in irq map config Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 23/26] net/ice: avoid rte malloc in RSS RETA operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 24/26] net/ice: avoid rte malloc in MAC address operations Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 25/26] net/ice: avoid rte malloc in raw pattern parsing Anatoly Burakov
2026-02-24 15:14 ` [PATCH v9 26/26] net/ice: avoid rte malloc in flow pattern match Anatoly Burakov
2026-02-25 12:04 ` [PATCH v9 00/26] Cleanups for ixgbe, i40e, iavf, and ice PMD's Bruce Richardson