DPDK-dev Archive on lore.kernel.org
* [PATCH v1 00/15] IXGBE fixes and cleanups
@ 2026-04-30 11:14 Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 01/15] net/ixgbe: fix flows not being scoped to port Anatoly Burakov
                   ` (15 more replies)
  0 siblings, 16 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev

This patchset fixes a number of long-standing issues in flow management
in the IXGBE driver, such as:

- storing process-local pointers in shared memory
- incorrect L4 protocol matching for FDIR
- wrong handling of SCTP flow items
- reading stale FDIR state after flow destroy/flush
- storing all flows in global lists

In addition, some cleanup is performed: refactoring, moving things
around to avoid accessing process-local state, and writing read-only
values at init time instead of deep in the FDIR code.

Finally, FDIR was also rejecting protocol-only matches for TCP
and UDP; these are now supported.

Depends on flow dump patchset: https://patches.dpdk.org/project/dpdk/list/?series=38016

Anatoly Burakov (15):
  net/ixgbe: fix flows not being scoped to port
  net/ixgbe: fix shared PF pointer in representor
  net/ixgbe: fix non-shared data in IPsec session
  net/ixgbe: fix SCTP protocol-only flow parsing
  net/ixgbe: fix L4 protocol mask handling
  net/ixgbe: reset flow state on clear paths
  net/ixgbe: store max VFs in adapter
  net/ixgbe: do not use flow list to count flows
  net/ixgbe: remove redundant flow tracking lists
  net/ixgbe: reduce FDIR conf macro usage
  net/ixgbe: use adapter in flow-related calls
  net/ixgbe: support protocol-only TCP and UDP rules
  net/ixgbe: write drop queue at init
  net/ixgbe: rely less on global flow state
  net/ixgbe: refactor flow creation

 drivers/net/intel/ixgbe/ixgbe_ethdev.c        | 112 +--
 drivers/net/intel/ixgbe/ixgbe_ethdev.h        |  45 +-
 drivers/net/intel/ixgbe/ixgbe_fdir.c          | 240 +++---
 drivers/net/intel/ixgbe/ixgbe_flow.c          | 716 +++++++++---------
 drivers/net/intel/ixgbe/ixgbe_ipsec.c         |  10 +-
 drivers/net/intel/ixgbe/ixgbe_ipsec.h         |   3 +-
 drivers/net/intel/ixgbe/ixgbe_rxtx.c          |  10 +-
 .../net/intel/ixgbe/ixgbe_vf_representor.c    |  63 +-
 8 files changed, 590 insertions(+), 609 deletions(-)

-- 
2.47.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v1 01/15] net/ixgbe: fix flows not being scoped to port
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 02/15] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Wei Dai, Beilei Xing, Wenzhuo Lu,
	Wei Zhao

Currently, ixgbe keeps `rte_flow` software state in global static lists
without tracking which port created each rule. As a result, a rule
removal request can affect rules that belong to another port (for example,
a flow flush command will drop *all* flows from *all* ixgbe ports).

Fix this by moving the flow lists into the adapter structure.
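The key layout trick used by the patch can be sketched standalone with
the standard `sys/queue.h` TAILQ macros (names here are illustrative,
not the driver's actual code): embedding a common base element as the
first member of every filter type lets a single flush helper serve all
per-port lists.

```c
/* Minimal sketch of the common-base-element list layout, using libc
 * TAILQ macros. Not the actual driver code. */
#include <sys/queue.h>
#include <assert.h>
#include <stdlib.h>

struct filter_ele_base {
	TAILQ_ENTRY(filter_ele_base) entries;
};
TAILQ_HEAD(filter_ele_list, filter_ele_base);

/* Each concrete filter type embeds the base as its FIRST member, so a
 * pointer to the element can be cast to and from the base safely. */
struct ntuple_filter_ele {
	struct filter_ele_base base;
	int filter_info;	/* stand-in for the real filter struct */
};

/* One helper flushes any list, regardless of element type. */
static void
filter_flush(struct filter_ele_list *list)
{
	struct filter_ele_base *ele;

	while ((ele = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, ele, entries);
		free(ele);
	}
}
```

Making these list heads per-adapter (one set per port) instead of
`static` is then enough to scope flush/destroy to the owning port.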

Fixes: 72c135a89f80 ("net/ixgbe: create consistent filter")
Cc: wei.zhao1@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Depends-on: series-38016 ("Implement flow dump")

 drivers/net/intel/ixgbe/ixgbe_ethdev.c |   4 +-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h |  21 ++-
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 247 +++++++++++--------------
 3 files changed, 126 insertions(+), 146 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 57d929cf2c..9454cbee0a 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1334,7 +1334,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		goto err_l2_tn_filter_init;
 
 	/* initialize flow filter lists */
-	ixgbe_filterlist_init();
+	ixgbe_filterlist_init(eth_dev);
 
 	/* initialize bandwidth configuration info */
 	memset(bw_conf, 0, sizeof(struct ixgbe_bw_conf));
@@ -3137,7 +3137,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
 	ixgbe_ntuple_filter_uninit(dev);
 
 	/* clear all the filters list */
-	ixgbe_filterlist_flush();
+	ixgbe_filterlist_flush(dev);
 
 	/* Remove all Traffic Manager configuration */
 	ixgbe_tm_conf_uninit(dev);
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 3f00896943..38d476d309 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -455,6 +455,19 @@ struct ixgbe_tm_conf {
 	bool committed;
 };
 
+struct ixgbe_filter_ele_base;
+TAILQ_HEAD(ixgbe_filter_ele_list, ixgbe_filter_ele_base);
+
+struct ixgbe_flow_lists {
+	struct ixgbe_filter_ele_list ntuple_list;
+	struct ixgbe_filter_ele_list ethertype_list;
+	struct ixgbe_filter_ele_list syn_list;
+	struct ixgbe_filter_ele_list fdir_list;
+	struct ixgbe_filter_ele_list l2_tunnel_list;
+	struct ixgbe_filter_ele_list rss_list;
+	struct ixgbe_filter_ele_list flow_list;
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -476,6 +489,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_LIBRTE_IXGBE_BYPASS */
 	struct ixgbe_filter_info    filter;
+	struct ixgbe_flow_lists     flow_lists;
 	struct ixgbe_l2_tn_info     l2_tn;
 	struct ixgbe_bw_conf        bw_conf;
 	struct ixgbe_ipsec          ipsec;
@@ -560,6 +574,9 @@ uint16_t ixgbe_vf_representor_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
+#define IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(adapter) \
+	(&((struct ixgbe_adapter *)adapter)->flow_lists)
+
 #define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->l2_tn)
 
@@ -690,8 +707,8 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 int
 ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel);
-void ixgbe_filterlist_init(void);
-void ixgbe_filterlist_flush(void);
+void ixgbe_filterlist_init(struct rte_eth_dev *dev);
+void ixgbe_filterlist_flush(struct rte_eth_dev *dev);
 /*
  * Flow director function prototypes
  */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index e40ece9755..c3abba4a90 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -24,6 +24,7 @@
 #include <rte_eal.h>
 #include <rte_alarm.h>
 #include <rte_ether.h>
+#include <rte_tailq.h>
 #include <ethdev_driver.h>
 #include <rte_malloc.h>
 #include <rte_random.h>
@@ -50,58 +51,46 @@
 #define IXGBE_MAX_N_TUPLE_PRIO 7
 #define IXGBE_MAX_FLX_SOURCE_OFF 62
 
+struct ixgbe_filter_ele_base {
+	TAILQ_ENTRY(ixgbe_filter_ele_base) entries;
+};
+
 /* ntuple filter list structure */
 struct ixgbe_ntuple_filter_ele {
-	TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct rte_eth_ntuple_filter filter_info;
 };
 /* ethertype filter list structure */
 struct ixgbe_ethertype_filter_ele {
-	TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct rte_eth_ethertype_filter filter_info;
 };
 /* syn filter list structure */
 struct ixgbe_eth_syn_filter_ele {
-	TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct rte_eth_syn_filter filter_info;
 };
 /* fdir filter list structure */
 struct ixgbe_fdir_rule_ele {
-	TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct ixgbe_fdir_rule filter_info;
 };
 /* l2_tunnel filter list structure */
 struct ixgbe_eth_l2_tunnel_conf_ele {
-	TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct ixgbe_l2_tunnel_conf filter_info;
 };
 /* rss filter list structure */
 struct ixgbe_rss_conf_ele {
-	TAILQ_ENTRY(ixgbe_rss_conf_ele) entries;
+	struct ixgbe_filter_ele_base base;
 	struct ixgbe_rte_flow_rss_conf filter_info;
 };
 /* ixgbe_flow memory list structure */
 struct ixgbe_flow_mem {
-	TAILQ_ENTRY(ixgbe_flow_mem) entries;
+	struct ixgbe_filter_ele_base base;
 	struct rte_flow *flow;
 };
 
-TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
-TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
-TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
-TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
-TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
-TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
-TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
-
-static struct ixgbe_ntuple_filter_list filter_ntuple_list;
-static struct ixgbe_ethertype_filter_list filter_ethertype_list;
-static struct ixgbe_syn_filter_list filter_syn_list;
-static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
-static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
-static struct ixgbe_rss_filter_list filter_rss_list;
-static struct ixgbe_flow_mem_list ixgbe_flow_list;
-
 /**
  * Endless loop will never happen with below assumption
  * 1. there is at least one no-void item(END)
@@ -3014,76 +3003,52 @@ ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
 }
 
 void
-ixgbe_filterlist_init(void)
+ixgbe_filterlist_init(struct rte_eth_dev *dev)
 {
-	TAILQ_INIT(&filter_ntuple_list);
-	TAILQ_INIT(&filter_ethertype_list);
-	TAILQ_INIT(&filter_syn_list);
-	TAILQ_INIT(&filter_fdir_list);
-	TAILQ_INIT(&filter_l2_tunnel_list);
-	TAILQ_INIT(&filter_rss_list);
-	TAILQ_INIT(&ixgbe_flow_list);
+	struct ixgbe_flow_lists *flow_lists =
+		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
+
+	TAILQ_INIT(&flow_lists->ntuple_list);
+	TAILQ_INIT(&flow_lists->ethertype_list);
+	TAILQ_INIT(&flow_lists->syn_list);
+	TAILQ_INIT(&flow_lists->fdir_list);
+	TAILQ_INIT(&flow_lists->l2_tunnel_list);
+	TAILQ_INIT(&flow_lists->rss_list);
+	TAILQ_INIT(&flow_lists->flow_list);
+}
+
+static void
+ixgbe_filter_flush(struct ixgbe_filter_ele_list *list)
+{
+	struct ixgbe_filter_ele_base *ele, *tmp;
+
+	RTE_TAILQ_FOREACH_SAFE(ele, list, entries, tmp) {
+		TAILQ_REMOVE(list, ele, entries);
+		rte_free(ele);
+	}
 }
 
 void
-ixgbe_filterlist_flush(void)
+ixgbe_filterlist_flush(struct rte_eth_dev *dev)
 {
-	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
-	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
-	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
-	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
-	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
-	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
-	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_flow_lists *flow_lists =
+		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
+	struct ixgbe_filter_ele_base *ele, *tmp;
 
-	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
-		TAILQ_REMOVE(&filter_ntuple_list,
-				 ntuple_filter_ptr,
-				 entries);
-		rte_free(ntuple_filter_ptr);
-	}
+	ixgbe_filter_flush(&flow_lists->ntuple_list);
+	ixgbe_filter_flush(&flow_lists->ethertype_list);
+	ixgbe_filter_flush(&flow_lists->syn_list);
+	ixgbe_filter_flush(&flow_lists->l2_tunnel_list);
+	ixgbe_filter_flush(&flow_lists->fdir_list);
+	ixgbe_filter_flush(&flow_lists->rss_list);
 
-	while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
-		TAILQ_REMOVE(&filter_ethertype_list,
-				 ethertype_filter_ptr,
-				 entries);
-		rte_free(ethertype_filter_ptr);
-	}
+	RTE_TAILQ_FOREACH_SAFE(ele, &flow_lists->flow_list, entries, tmp) {
+		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
+			(struct ixgbe_flow_mem *)ele;
 
-	while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
-		TAILQ_REMOVE(&filter_syn_list,
-				 syn_filter_ptr,
-				 entries);
-		rte_free(syn_filter_ptr);
-	}
-
-	while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
-		TAILQ_REMOVE(&filter_l2_tunnel_list,
-				 l2_tn_filter_ptr,
-				 entries);
-		rte_free(l2_tn_filter_ptr);
-	}
-
-	while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
-		TAILQ_REMOVE(&filter_fdir_list,
-				 fdir_rule_ptr,
-				 entries);
-		rte_free(fdir_rule_ptr);
-	}
-
-	while ((rss_filter_ptr = TAILQ_FIRST(&filter_rss_list))) {
-		TAILQ_REMOVE(&filter_rss_list,
-				 rss_filter_ptr,
-				 entries);
-		rte_free(rss_filter_ptr);
-	}
-
-	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
-		TAILQ_REMOVE(&ixgbe_flow_list,
-				 ixgbe_flow_mem_ptr,
-				 entries);
+		TAILQ_REMOVE(&flow_lists->flow_list, ele, entries);
 		rte_free(ixgbe_flow_mem_ptr->flow);
-		rte_free(ixgbe_flow_mem_ptr);
+		rte_free(ele);
 	}
 }
 
@@ -3117,6 +3082,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+	struct ixgbe_flow_lists *flow_lists =
+		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
 	uint8_t first_mask = FALSE;
 
 	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
@@ -3132,8 +3099,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		return NULL;
 	}
 	ixgbe_flow_mem_ptr->flow = flow;
-	TAILQ_INSERT_TAIL(&ixgbe_flow_list,
-				ixgbe_flow_mem_ptr, entries);
+	TAILQ_INSERT_TAIL(&flow_lists->flow_list,
+				&ixgbe_flow_mem_ptr->base, entries);
 
 	/**
 	 *  Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
@@ -3160,8 +3127,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&ntuple_filter_ptr->filter_info,
 				&ntuple_filter,
 				sizeof(struct rte_eth_ntuple_filter));
-			TAILQ_INSERT_TAIL(&filter_ntuple_list,
-				ntuple_filter_ptr, entries);
+			TAILQ_INSERT_TAIL(&flow_lists->ntuple_list,
+				&ntuple_filter_ptr->base, entries);
 			flow->rule = ntuple_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
 			return flow;
@@ -3186,8 +3153,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&ethertype_filter_ptr->filter_info,
 				&ethertype_filter,
 				sizeof(struct rte_eth_ethertype_filter));
-			TAILQ_INSERT_TAIL(&filter_ethertype_list,
-				ethertype_filter_ptr, entries);
+			TAILQ_INSERT_TAIL(&flow_lists->ethertype_list,
+				&ethertype_filter_ptr->base, entries);
 			flow->rule = ethertype_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
 			return flow;
@@ -3210,8 +3177,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&syn_filter_ptr->filter_info,
 				&syn_filter,
 				sizeof(struct rte_eth_syn_filter));
-			TAILQ_INSERT_TAIL(&filter_syn_list,
-				syn_filter_ptr,
+			TAILQ_INSERT_TAIL(&flow_lists->syn_list,
+				&syn_filter_ptr->base,
 				entries);
 			flow->rule = syn_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_SYN;
@@ -3273,8 +3240,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 				rte_memcpy(&fdir_rule_ptr->filter_info,
 					&fdir_rule,
 					sizeof(struct ixgbe_fdir_rule));
-				TAILQ_INSERT_TAIL(&filter_fdir_list,
-					fdir_rule_ptr, entries);
+				TAILQ_INSERT_TAIL(&flow_lists->fdir_list,
+					&fdir_rule_ptr->base, entries);
 				flow->rule = fdir_rule_ptr;
 				flow->filter_type = RTE_ETH_FILTER_FDIR;
 
@@ -3310,8 +3277,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&l2_tn_filter_ptr->filter_info,
 				&l2_tn_filter,
 				sizeof(struct ixgbe_l2_tunnel_conf));
-			TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
-				l2_tn_filter_ptr, entries);
+			TAILQ_INSERT_TAIL(&flow_lists->l2_tunnel_list,
+				&l2_tn_filter_ptr->base, entries);
 			flow->rule = l2_tn_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
 			return flow;
@@ -3332,8 +3299,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			}
 			ixgbe_rss_conf_init(&rss_filter_ptr->filter_info,
 					    &rss_conf.conf);
-			TAILQ_INSERT_TAIL(&filter_rss_list,
-				rss_filter_ptr, entries);
+			TAILQ_INSERT_TAIL(&flow_lists->rss_list,
+				&rss_filter_ptr->base, entries);
 			flow->rule = rss_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_HASH;
 			return flow;
@@ -3341,8 +3308,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	}
 
 out:
-	TAILQ_REMOVE(&ixgbe_flow_list,
-		ixgbe_flow_mem_ptr, entries);
+	TAILQ_REMOVE(&flow_lists->flow_list,
+		&ixgbe_flow_mem_ptr->base, entries);
 	rte_flow_error_set(error, -ret,
 			   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 			   "Failed to create flow.");
@@ -3434,11 +3401,27 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
 	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
-	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
+	struct ixgbe_filter_ele_base *flow_mem_base;
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	struct ixgbe_flow_lists *flow_lists =
+		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 
+	/* Validate ownership before touching HW/SW state. */
+	TAILQ_FOREACH(flow_mem_base, &flow_lists->flow_list, entries) {
+		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
+			(struct ixgbe_flow_mem *)flow_mem_base;
+
+		if (ixgbe_flow_mem_ptr->flow == pmd_flow)
+			break;
+	}
+	if (flow_mem_base == NULL) {
+		return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				"Flow not found for this port");
+	}
+
 	/* Special case for SECURITY flows */
 	if (flow->is_security) {
 		ret = 0;
@@ -3454,8 +3437,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct rte_eth_ntuple_filter));
 		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_ntuple_list,
-			ntuple_filter_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->ntuple_list,
+				&ntuple_filter_ptr->base, entries);
 			rte_free(ntuple_filter_ptr);
 		}
 		break;
@@ -3468,8 +3451,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		ret = ixgbe_add_del_ethertype_filter(dev,
 				&ethertype_filter, FALSE);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_ethertype_list,
-				ethertype_filter_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->ethertype_list,
+				&ethertype_filter_ptr->base, entries);
 			rte_free(ethertype_filter_ptr);
 		}
 		break;
@@ -3481,8 +3464,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct rte_eth_syn_filter));
 		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_syn_list,
-				syn_filter_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->syn_list,
+				&syn_filter_ptr->base, entries);
 			rte_free(syn_filter_ptr);
 		}
 		break;
@@ -3493,10 +3476,10 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct ixgbe_fdir_rule));
 		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_fdir_list,
-				fdir_rule_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->fdir_list,
+				&fdir_rule_ptr->base, entries);
 			rte_free(fdir_rule_ptr);
-			if (TAILQ_EMPTY(&filter_fdir_list))
+			if (TAILQ_EMPTY(&flow_lists->fdir_list))
 				fdir_info->mask_added = false;
 		}
 		break;
@@ -3507,8 +3490,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct ixgbe_l2_tunnel_conf));
 		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_l2_tunnel_list,
-				l2_tn_filter_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->l2_tunnel_list,
+				&l2_tn_filter_ptr->base, entries);
 			rte_free(l2_tn_filter_ptr);
 		}
 		break;
@@ -3518,8 +3501,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		ret = ixgbe_config_rss_filter(dev,
 					&rss_filter_ptr->filter_info, FALSE);
 		if (!ret) {
-			TAILQ_REMOVE(&filter_rss_list,
-				rss_filter_ptr, entries);
+			TAILQ_REMOVE(&flow_lists->rss_list,
+				&rss_filter_ptr->base, entries);
 			rte_free(rss_filter_ptr);
 		}
 		break;
@@ -3538,14 +3521,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 free:
-	TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
-		if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
-			TAILQ_REMOVE(&ixgbe_flow_list,
-				ixgbe_flow_mem_ptr, entries);
-			rte_free(ixgbe_flow_mem_ptr);
-			break;
-		}
-	}
+	TAILQ_REMOVE(&flow_lists->flow_list, flow_mem_base, entries);
+	rte_free(flow_mem_base);
 	rte_free(flow);
 
 	return ret;
@@ -3578,7 +3555,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 
 	ixgbe_clear_rss_filter(dev);
 
-	ixgbe_filterlist_flush();
+	ixgbe_filterlist_flush(dev);
 
 	return 0;
 }
@@ -3639,22 +3616,7 @@ ixgbe_flow_rule_data(const struct rte_flow *flow)
 	if (flow->is_security || flow->rule == NULL)
 		return NULL;
 
-	switch (flow->filter_type) {
-	case RTE_ETH_FILTER_NTUPLE:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_ntuple_filter_ele)));
-	case RTE_ETH_FILTER_ETHERTYPE:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_ethertype_filter_ele)));
-	case RTE_ETH_FILTER_SYN:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_eth_syn_filter_ele)));
-	case RTE_ETH_FILTER_FDIR:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_fdir_rule_ele)));
-	case RTE_ETH_FILTER_L2_TUNNEL:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele)));
-	case RTE_ETH_FILTER_HASH:
-		return RTE_PTR_ADD(flow->rule, sizeof(TAILQ_ENTRY(ixgbe_rss_conf_ele)));
-	default:
-		return NULL;
-	}
+	return RTE_PTR_ADD(flow->rule, sizeof(struct ixgbe_filter_ele_base));
 }
 
 static void
@@ -3684,15 +3646,16 @@ ixgbe_flow_dump_blob(FILE *file, const char *engine,
 }
 
 static int
-ixgbe_flow_dev_dump(struct rte_eth_dev *dev __rte_unused,
+ixgbe_flow_dev_dump(struct rte_eth_dev *dev,
 		    struct rte_flow *flow,
 		    FILE *file,
 		    struct rte_flow_error *error)
 {
-	struct ixgbe_flow_mem *flow_mem_base;
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ixgbe_filter_ele_base *flow_mem_base;
 	bool found = false;
 
-	TAILQ_FOREACH(flow_mem_base, &ixgbe_flow_list, entries) {
+	TAILQ_FOREACH(flow_mem_base, &ad->flow_lists.flow_list, entries) {
 		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
 			(struct ixgbe_flow_mem *)flow_mem_base;
 		struct rte_flow *p_flow = ixgbe_flow_mem_ptr->flow;
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v1 02/15] net/ixgbe: fix shared PF pointer in representor
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 01/15] net/ixgbe: fix flows not being scoped to port Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 03/15] net/ixgbe: fix non-shared data in IPsec session Anatoly Burakov
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Declan Doherty, Ferruh Yigit,
	Mohammad Abdul Awal, Remy Horton

Currently, ixgbe representor private data stores a PF ethdev pointer.
That pointer is process local, but it is stored in shared memory, so a
secondary process can read an invalid pointer value.

Fix this by storing the PF port id in representor private data and
resolving the PF ethdev from rte_eth_devices[] in each process. Return
-ENODEV when the PF port is not valid.

This is not technically a bug in practice as using `rte_flow` from
secondary processes isn't supported, but we still shouldn't do that.
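The pattern can be sketched standalone (names are illustrative mocks,
not the real DPDK API): shared state keeps only a port *id*, and each
process resolves it through its own device table on every use.

```c
/* Hedged sketch: resolve a shared port id to a process-local device
 * pointer, instead of storing the pointer itself in shared memory.
 * In DPDK the table would be rte_eth_devices[] and the validity check
 * rte_eth_dev_is_valid_port(); these are simplified mocks. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PORTS 32

struct eth_dev {
	bool attached;
};

/* Process-local device table: pointers into it are only valid within
 * the process that looked them up. */
static struct eth_dev local_devices[MAX_PORTS];

static bool
port_is_valid(uint16_t port_id)
{
	return port_id < MAX_PORTS && local_devices[port_id].attached;
}

/* Called on each use; caller returns -ENODEV on NULL. */
static struct eth_dev *
resolve_pf(uint16_t pf_port_id)
{
	if (!port_is_valid(pf_port_id))
		return NULL;
	return &local_devices[pf_port_id];
}
```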

Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")
Cc: declan.doherty@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c        |  2 +-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h        |  2 +-
 .../net/intel/ixgbe/ixgbe_vf_representor.c    | 63 ++++++++++++++-----
 3 files changed, 50 insertions(+), 17 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 9454cbee0a..5c95507d60 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1818,7 +1818,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 
 		representor.vf_id = eth_da.representor_ports[i];
 		representor.switch_domain_id = vfinfo->switch_domain_id;
-		representor.pf_ethdev = pf_ethdev;
+		representor.pf_port_id = pf_ethdev->data->port_id;
 
 		/* representor port net_bdf_port */
 		snprintf(name, sizeof(name), "net_%s_representor_%d",
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 38d476d309..1293ea49cb 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -518,7 +518,7 @@ struct ixgbe_adapter {
 struct ixgbe_vf_representor {
 	uint16_t vf_id;
 	uint16_t switch_domain_id;
-	struct rte_eth_dev *pf_ethdev;
+	uint16_t pf_port_id;
 };
 
 int ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
diff --git a/drivers/net/intel/ixgbe/ixgbe_vf_representor.c b/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
index 901d80e406..52b43530c0 100644
--- a/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
@@ -13,14 +13,27 @@
 #include "ixgbe_rxtx.h"
 #include "rte_pmd_ixgbe.h"
 
+static struct rte_eth_dev *
+ixgbe_vf_representor_pf_get(const struct ixgbe_vf_representor *representor)
+{
+	if (!rte_eth_dev_is_valid_port(representor->pf_port_id))
+		return NULL;
+
+	return &rte_eth_devices[representor->pf_port_id];
+}
+
 
 static int
 ixgbe_vf_representor_link_update(struct rte_eth_dev *ethdev,
 	int wait_to_complete)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	return ixgbe_dev_link_update_share(representor->pf_ethdev,
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
+	return ixgbe_dev_link_update_share(pf_ethdev,
 		wait_to_complete, 0);
 }
 
@@ -29,9 +42,13 @@ ixgbe_vf_representor_mac_addr_set(struct rte_eth_dev *ethdev,
 	struct rte_ether_addr *mac_addr)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
+
+	if (pf_ethdev == NULL)
+		return -ENODEV;
 
 	return rte_pmd_ixgbe_set_vf_mac_addr(
-		representor->pf_ethdev->data->port_id,
+		pf_ethdev->data->port_id,
 		representor->vf_id, mac_addr);
 }
 
@@ -40,11 +57,14 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	struct rte_eth_dev_info *dev_info)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(
-		representor->pf_ethdev->data->dev_private);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
 
-	dev_info->device = representor->pf_ethdev->device;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(pf_ethdev->data->dev_private);
+
+	dev_info->device = pf_ethdev->device;
 
 	dev_info->min_rx_bufsize = 1024;
 	/**< Minimum size of RX buffer. */
@@ -70,11 +90,11 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
-		representor->pf_ethdev->data->dev_link.link_speed;
+		pf_ethdev->data->dev_link.link_speed;
 	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
-		representor->pf_ethdev->device->name;
+		pf_ethdev->device->name;
 	dev_info->switch_info.domain_id = representor->switch_domain_id;
 	dev_info->switch_info.port_id = representor->vf_id;
 
@@ -123,10 +143,14 @@ ixgbe_vf_representor_vlan_filter_set(struct rte_eth_dev *ethdev,
 	uint16_t vlan_id, int on)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 	uint64_t vf_mask = 1ULL << representor->vf_id;
 
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
 	return rte_pmd_ixgbe_set_vf_vlan_filter(
-		representor->pf_ethdev->data->port_id, vlan_id, vf_mask, on);
+		pf_ethdev->data->port_id, vlan_id, vf_mask, on);
 }
 
 static void
@@ -134,8 +158,12 @@ ixgbe_vf_representor_vlan_strip_queue_set(struct rte_eth_dev *ethdev,
 	__rte_unused uint16_t rx_queue_id, int on)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	rte_pmd_ixgbe_set_vf_vlan_stripq(representor->pf_ethdev->data->port_id,
+	if (pf_ethdev == NULL)
+		return;
+
+	rte_pmd_ixgbe_set_vf_vlan_stripq(pf_ethdev->data->port_id,
 		representor->vf_id, on);
 }
 
@@ -175,6 +203,7 @@ int
 ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev;
 
 	struct ixgbe_vf_info *vf_data;
 	struct rte_pci_device *pci_dev;
@@ -187,17 +216,21 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 		((struct ixgbe_vf_representor *)init_params)->vf_id;
 	representor->switch_domain_id =
 		((struct ixgbe_vf_representor *)init_params)->switch_domain_id;
-	representor->pf_ethdev =
-		((struct ixgbe_vf_representor *)init_params)->pf_ethdev;
+	representor->pf_port_id =
+		((struct ixgbe_vf_representor *)init_params)->pf_port_id;
 
-	pci_dev = RTE_ETH_DEV_TO_PCI(representor->pf_ethdev);
+	pf_ethdev = ixgbe_vf_representor_pf_get(representor);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(pf_ethdev);
 
 	if (representor->vf_id >= pci_dev->max_vfs)
 		return -ENODEV;
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
-	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
+	ethdev->data->backer_port_id = pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
@@ -214,13 +247,13 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	/* Reference VF mac address from PF data structure */
 	vf_data = *IXGBE_DEV_PRIVATE_TO_P_VFDATA(
-		representor->pf_ethdev->data->dev_private);
+		pf_ethdev->data->dev_private);
 
 	ethdev->data->mac_addrs = (struct rte_ether_addr *)
 		vf_data[representor->vf_id].vf_mac_addresses;
 
 	/* Link state. Inherited from PF */
-	link = &representor->pf_ethdev->data->dev_link;
+	link = &pf_ethdev->data->dev_link;
 
 	ethdev->data->dev_link.link_speed = link->link_speed;
 	ethdev->data->dev_link.link_duplex = link->link_duplex;
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v1 03/15] net/ixgbe: fix non-shared data in IPsec session
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 01/15] net/ixgbe: fix flows not being scoped to port Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 02/15] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-05-07 10:50   ` Radu Nicolau
  2026-04-30 11:14 ` [PATCH v1 04/15] net/ixgbe: fix SCTP protocol-only flow parsing Anatoly Burakov
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Declan Doherty, Radu Nicolau

Currently, ixgbe IPsec session private data stores an ethdev pointer.
That pointer is process local, but the session private data is shared,
so a secondary process can read an invalid pointer value.

Fix this by storing the ethdev data pointer in session private data
instead, and using it for session/device binding checks and dev_private
lookups when adding SAs.
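The binding check can be sketched standalone (simplified mock types,
not the real DPDK structures): the session stores the shared `dev->data`
pointer, which is valid in every process, and compares that rather than
the process-local `dev` pointer.

```c
/* Illustrative sketch: check session/device binding through the shared
 * data pointer. struct eth_dev is process-local; struct eth_dev_data
 * lives in shared memory, so its address means the same thing in both
 * primary and secondary processes. */
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct eth_dev_data {
	int port_id;			/* lives in shared memory */
};

struct eth_dev {
	struct eth_dev_data *data;	/* process-local wrapper */
};

struct crypto_session {
	struct eth_dev_data *dev_data;	/* shared, process-safe handle */
};

static int
session_check_binding(const struct eth_dev *dev,
		      const struct crypto_session *s)
{
	if (dev->data != s->dev_data)
		return -ENODEV;	/* session bound to another device */
	return 0;
}
```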

Fixes: 9a0752f498d2 ("net/ixgbe: enable inline IPsec")
Cc: radu.nicolau@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ipsec.c | 10 +++++-----
 drivers/net/intel/ixgbe/ixgbe_ipsec.h |  3 ++-
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/intel/ixgbe/ixgbe_ipsec.c
index fe9a96c54d..88225bccc0 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ipsec.c
@@ -88,10 +88,10 @@ ixgbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev)
 static int
 ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
 {
-	struct rte_eth_dev *dev = ic_session->dev;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_dev_data *dev_data = ic_session->dev_data;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev_data->dev_private);
 	struct ixgbe_ipsec *priv = IXGBE_DEV_PRIVATE_TO_IPSEC(
-			dev->data->dev_private);
+			dev_data->dev_private);
 	uint32_t reg_val;
 	int sa_index = -1;
 
@@ -405,7 +405,7 @@ ixgbe_crypto_create_session(void *device,
 	memcpy(&ic_session->salt,
 	       &aead_xform->key.data[aead_xform->key.length], 4);
 	ic_session->spi = conf->ipsec.spi;
-	ic_session->dev = eth_dev;
+	ic_session->dev_data = eth_dev->data;
 
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (ixgbe_crypto_add_sa(ic_session)) {
@@ -430,7 +430,7 @@ ixgbe_crypto_remove_session(void *device,
 	struct rte_eth_dev *eth_dev = device;
 	struct ixgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);
 
-	if (eth_dev != ic_session->dev) {
+	if (eth_dev->data != ic_session->dev_data) {
 		PMD_DRV_LOG(ERR, "Session not bound to this device");
 		return -ENODEV;
 	}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ipsec.h b/drivers/net/intel/ixgbe/ixgbe_ipsec.h
index e7c7186264..356817c61b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ipsec.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ipsec.h
@@ -5,6 +5,7 @@
 #ifndef IXGBE_IPSEC_H_
 #define IXGBE_IPSEC_H_
 
+#include <ethdev_driver.h>
 #include <rte_security.h>
 #include <rte_security_driver.h>
 
@@ -72,7 +73,7 @@ struct __rte_cache_aligned ixgbe_crypto_session {
 	uint32_t spi;
 	struct ipaddr src_ip;
 	struct ipaddr dst_ip;
-	struct rte_eth_dev *dev;
+	struct rte_eth_dev_data *dev_data;
 };
 
 struct ixgbe_crypto_rx_ip_table {
-- 
2.47.3



* [PATCH v1 04/15] net/ixgbe: fix SCTP protocol-only flow parsing
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (2 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 03/15] net/ixgbe: fix non-shared data in IPsec session Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 05/15] net/ixgbe: fix L4 protocol mask handling Anatoly Burakov
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Qi Zhang

Currently, `ixgbe_parse_fdir_filter_normal()` checks the MAC type before
checking whether the `RTE_FLOW_ITEM_TYPE_SCTP` item carries a mask. On
`ixgbe_mac_X550`, `ixgbe_mac_X550EM_x`, `ixgbe_mac_X550EM_a` and
`ixgbe_mac_E610`, this makes `item->mask` mandatory and rejects
protocol-only SCTP patterns.

These devices can still match SCTP by protocol without L4 port masks.
Only SCTP port masking is MAC type dependent, but the current check order
makes the protocol-only path available only on older MAC types.

Fix this by checking whether a mask is present first. Keep the MAC type
check only for SCTP port masks, and accept protocol-only SCTP matching on
X550 and E610 as well.

Fixes: 86e19565f5e2 ("net/ixgbe: fix SCTP port support")
Cc: qi.z.zhang@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_flow.c | 86 ++++++++++++++--------------
 1 file changed, 42 insertions(+), 44 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index c3abba4a90..71da5ac72e 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -2180,57 +2180,55 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 			return -rte_errno;
 		}
 
-		/* only some mac types support sctp port */
-		if (hw->mac.type == ixgbe_mac_X550 ||
-		    hw->mac.type == ixgbe_mac_X550EM_x ||
-		    hw->mac.type == ixgbe_mac_X550EM_a ||
-		    hw->mac.type == ixgbe_mac_E610) {
-			/**
-			 * Only care about src & dst ports,
-			 * others should be masked.
-			 */
-			if (!item->mask) {
-				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-				rte_flow_error_set(error, EINVAL,
-					RTE_FLOW_ERROR_TYPE_ITEM,
-					item, "Not supported by fdir filter");
-				return -rte_errno;
-			}
-			rule->b_mask = TRUE;
-			sctp_mask = item->mask;
-			if (sctp_mask->hdr.tag ||
-				sctp_mask->hdr.cksum) {
-				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-				rte_flow_error_set(error, EINVAL,
-					RTE_FLOW_ERROR_TYPE_ITEM,
-					item, "Not supported by fdir filter");
-				return -rte_errno;
-			}
-			rule->mask.src_port_mask = sctp_mask->hdr.src_port;
-			rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+		sctp_mask = item->mask;
+		if (sctp_mask != NULL) {
+			/* only some mac types support sctp port masking */
+			if (hw->mac.type == ixgbe_mac_X550 ||
+			    hw->mac.type == ixgbe_mac_X550EM_x ||
+			    hw->mac.type == ixgbe_mac_X550EM_a ||
+			    hw->mac.type == ixgbe_mac_E610) {
+				/**
+				 * Only care about src & dst ports,
+				 * others should be masked.
+				 */
+				rule->b_mask = TRUE;
+				if (sctp_mask->hdr.tag ||
+				    sctp_mask->hdr.cksum) {
+					memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+				rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+				rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
 
-			if (item->spec) {
-				rule->b_spec = TRUE;
-				sctp_spec = item->spec;
-				rule->ixgbe_fdir.formatted.src_port =
-					sctp_spec->hdr.src_port;
-				rule->ixgbe_fdir.formatted.dst_port =
-					sctp_spec->hdr.dst_port;
-			}
-		/* others even sctp port is not supported */
-		} else {
-			sctp_mask = item->mask;
-			if (sctp_mask &&
-				(sctp_mask->hdr.src_port ||
-				 sctp_mask->hdr.dst_port ||
-				 sctp_mask->hdr.tag ||
-				 sctp_mask->hdr.cksum)) {
+				if (item->spec) {
+					rule->b_spec = TRUE;
+					sctp_spec = item->spec;
+					rule->ixgbe_fdir.formatted.src_port =
+						sctp_spec->hdr.src_port;
+					rule->ixgbe_fdir.formatted.dst_port =
+						sctp_spec->hdr.dst_port;
+				}
+			/* other mac types do not support sctp port masking at all */
+			} else if (sctp_mask->hdr.src_port ||
+				   sctp_mask->hdr.dst_port ||
+				   sctp_mask->hdr.tag ||
+				   sctp_mask->hdr.cksum) {
 				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ITEM,
 					item, "Not supported by fdir filter");
 				return -rte_errno;
 			}
+		} else if (item->spec != NULL) {
+			/* no mask means a protocol-only match; a spec without a mask is invalid */
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
 		}
 
 		item = next_no_fuzzy_pattern(pattern, item);
-- 
2.47.3



* [PATCH v1 05/15] net/ixgbe: fix L4 protocol mask handling
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (3 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 04/15] net/ixgbe: fix SCTP protocol-only flow parsing Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 06/15] net/ixgbe: reset flow state on clear paths Anatoly Burakov
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Beilei Xing, Wenzhuo Lu, Wei Dai,
	Wei Zhao

Currently, ixgbe sets `IXGBE_FDIRM_L4P` (ignore the L4 protocol) whenever
both `src_port_mask` and `dst_port_mask` are zero. This conflates two
different cases: broad L4 matches that care about the L4 protocol but
ignore L4 ports, and plain IP matches that should ignore the L4 protocol
entirely.

The current `rte_flow` path also copies parsed masks through `struct
rte_eth_fdir_masks`, which cannot preserve that distinction. As a
result, protocol-only `rte_flow` rules lose their L4 protocol match
information before the hardware FDIR mask is programmed.

Fix this by storing explicit L4 protocol match state in `struct
ixgbe_hw_fdir_mask` and by programming `IXGBE_FDIRM_L4P` from that state
instead of inferring it from port masks. Remove the lossy `struct
rte_eth_fdir_masks` mediation layer and program the hardware mask
directly from the ixgbe FDIR mask structure.

Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")
Cc: wei.zhao1@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.h |  1 +
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 90 +++-----------------------
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 11 ++--
 3 files changed, 15 insertions(+), 87 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 1293ea49cb..a0a2ea23b2 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -151,6 +151,7 @@ struct ixgbe_hw_fdir_mask {
 	uint32_t dst_ipv4_mask;
 	uint16_t src_ipv6_mask;
 	uint16_t dst_ipv6_mask;
+	uint8_t  l4_proto_match;
 	uint16_t src_port_mask;
 	uint16_t dst_port_mask;
 	uint16_t flex_bytes_mask;
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 0bdfbd411a..38f589623e 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -79,8 +79,6 @@
 #define IXGBE_FDIRIP6M_INNER_MAC_SHIFT 4
 
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
-static int fdir_set_input_mask(struct rte_eth_dev *dev,
-			       const struct rte_eth_fdir_masks *input_mask);
 static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
 static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
@@ -266,14 +264,8 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	/*
-	 * Program the relevant mask registers.  If src/dst_port or src/dst_addr
-	 * are zero, then assume a full mask for that field. Also assume that
-	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
-	 * cannot be masked out in this implementation.
-	 */
-	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
-		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
+	if (!info->mask.l4_proto_match)
+		/* set L4P to ignore L4 protocol for IP traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
 	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
@@ -306,7 +298,12 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 	 */
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
-	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
+	/* only certain MAC types have SCTP masking register */
+	if (hw->mac.type == ixgbe_mac_X550 ||
+			hw->mac.type == ixgbe_mac_X550EM_x ||
+			hw->mac.type == ixgbe_mac_X550EM_a ||
+			hw->mac.type == ixgbe_mac_E610)
+		IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
@@ -425,62 +422,6 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 	return IXGBE_SUCCESS;
 }
 
-static int
-ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
-				  const struct rte_eth_fdir_masks *input_mask)
-{
-	struct ixgbe_hw_fdir_info *info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
-
-	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
-	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-	info->mask.src_ipv6_mask = src_ipv6m;
-	info->mask.dst_ipv6_mask = dst_ipv6m;
-
-	return IXGBE_SUCCESS;
-}
-
-static int
-ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
-				 const struct rte_eth_fdir_masks *input_mask)
-{
-	struct ixgbe_hw_fdir_info *info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
-
-	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
-	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
-	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
-	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
-
-	return IXGBE_SUCCESS;
-}
-
-static int
-ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
-			    const struct rte_eth_fdir_masks *input_mask)
-{
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
-
-	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
-	    mode <= RTE_FDIR_MODE_PERFECT)
-		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
-	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
-		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
-
-	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
-	return -ENOTSUP;
-}
-
 int
 ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 {
@@ -551,19 +492,6 @@ ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
-{
-	int ret;
-
-	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
-	if (ret)
-		return ret;
-
-	return ixgbe_fdir_set_input_mask(dev);
-}
-
 /*
  * ixgbe_check_fdir_flex_conf -check if the flex payload and mask configuration
  * arguments are valid
@@ -681,7 +609,7 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 	for (i = 1; i < 8; i++)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
 
-	err = fdir_set_input_mask(dev, &IXGBE_DEV_FDIR_CONF(dev)->mask);
+	err = ixgbe_fdir_set_input_mask(dev);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD mask");
 		return err;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 71da5ac72e..6e87d373ad 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -1713,6 +1713,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
 	rule->mask.vlan_tci_mask = 0;
 	rule->mask.flex_bytes_mask = 0;
+	rule->mask.l4_proto_match = 0;
 	rule->mask.dst_port_mask = 0;
 	rule->mask.src_port_mask = 0;
 
@@ -2325,6 +2326,10 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 		}
 	}
 
+	/* L4 protocol matching is enabled when parser selected an L4 type. */
+	rule->mask.l4_proto_match =
+		(rule->ixgbe_fdir.formatted.flow_type & IXGBE_ATR_L4TYPE_MASK) != 0;
+
 	return ixgbe_parse_fdir_act_attr(attr, actions, rule, error);
 }
 
@@ -2852,12 +2857,6 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 
 step_next:
 
-	if (hw->mac.type == ixgbe_mac_82599EB &&
-		rule->fdirflags == IXGBE_FDIRCMD_DROP &&
-		(rule->ixgbe_fdir.formatted.src_port != 0 ||
-		rule->ixgbe_fdir.formatted.dst_port != 0))
-		return -ENOTSUP;
-
 	if (fdir_conf->mode == RTE_FDIR_MODE_NONE) {
 		fdir_conf->mode = rule->mode;
 		ret = ixgbe_fdir_configure(dev);
-- 
2.47.3



* [PATCH v1 06/15] net/ixgbe: reset flow state on clear paths
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (4 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 05/15] net/ixgbe: fix L4 protocol mask handling Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 07/15] net/ixgbe: store max VFs in adapter Anatoly Burakov
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin, Wei Dai, Beilei Xing, Wenzhuo Lu,
	Wei Zhao

When all FDIR rules are removed (either through destroy or flush), the
driver does not fully reset its internal state. This can result in stale
state being picked up in some circumstances.

Fix by clearing the internal state whenever the FDIR flow list becomes empty.

Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_fdir.c | 7 +++++++
 drivers/net/intel/ixgbe/ixgbe_flow.c | 7 ++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 38f589623e..f51582a4bf 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -1358,6 +1358,7 @@ ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 int
 ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 {
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
@@ -1376,6 +1377,12 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 		rte_free(fdir_filter);
 	}
 
+	/* reset internal FDIR state */
+	fdir_info->mask = (struct ixgbe_hw_fdir_mask){0};
+	fdir_info->flex_bytes_offset = 0;
+	fdir_info->mask_added = FALSE;
+	fdir_conf->mode = RTE_FDIR_MODE_NONE;
+
 	if (filter_flag != NULL)
 		ret = ixgbe_fdir_flush(dev);
 
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6e87d373ad..b73037d4c3 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -3473,11 +3473,16 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct ixgbe_fdir_rule));
 		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
 		if (!ret) {
+			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 			TAILQ_REMOVE(&flow_lists->fdir_list,
 				&fdir_rule_ptr->base, entries);
 			rte_free(fdir_rule_ptr);
-			if (TAILQ_EMPTY(&flow_lists->fdir_list))
+			if (TAILQ_EMPTY(&flow_lists->fdir_list)) {
 				fdir_info->mask_added = false;
+				fdir_info->mask = (struct ixgbe_hw_fdir_mask){0};
+				fdir_info->flex_bytes_offset = 0;
+				fdir_conf->mode = RTE_FDIR_MODE_NONE;
+			}
 		}
 		break;
 	case RTE_ETH_FILTER_L2_TUNNEL:
-- 
2.47.3



* [PATCH v1 07/15] net/ixgbe: store max VFs in adapter
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (5 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 06/15] net/ixgbe: reset flow state on clear paths Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 08/15] net/ixgbe: do not use flow list to count flows Anatoly Burakov
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Currently, `rte_flow`-related checks use `rte_eth_dev` to check for max
VFs. With the coming rework of flows, the aim is to make the code as
multiprocess-agnostic as possible, and the `rte_eth_dev` pointer is
process-local.

To support this in VF-related checks, cache `max_vfs` in the ixgbe adapter
during device init and read it in the `rte_flow` check paths to avoid a
direct dependency on `rte_eth_dev`.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 3 ++-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 1 +
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 8 ++++----
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 5c95507d60..1c4a2e1177 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1084,7 +1084,7 @@ ixgbe_parse_devargs(struct ixgbe_adapter *adapter,
 static int
 eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
-	struct ixgbe_adapter *ad = eth_dev->data->dev_private;
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	struct ixgbe_hw *hw =
@@ -1151,6 +1151,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->allow_unsupported_sfp = 1;
+	ad->max_vfs = pci_dev->max_vfs;
 
 	/* Initialize the shared code (base driver) */
 #ifdef RTE_LIBRTE_IXGBE_BYPASS
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index a0a2ea23b2..2fb6d55387 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -506,6 +506,7 @@ struct ixgbe_adapter {
 
 	/* Used for limiting SDP3 TX_DISABLE checks */
 	uint8_t sdp3_no_tx_disable;
+	uint16_t max_vfs;
 
 	/* Used for VF link sync with PF's physical and logical (by checking
 	 * mailbox status) link status.
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index b73037d4c3..e641a9d405 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -1263,7 +1263,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 	const struct rte_flow_item_e_tag *e_tag_mask;
 	const struct rte_flow_action *act;
 	const struct rte_flow_action_vf *act_vf;
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
 	if (!pattern) {
 		rte_flow_error_set(error, EINVAL,
@@ -1395,7 +1395,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		act_vf = (const struct rte_flow_action_vf *)act->conf;
 		filter->pool = act_vf->id;
 	} else {
-		filter->pool = pci_dev->max_vfs;
+		filter->pool = ad->max_vfs;
 	}
 
 	/* check if the next not void item is END */
@@ -1421,7 +1421,7 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t vf_num;
 
 	ret = cons_parse_l2_tn_filter(dev, attr, pattern,
@@ -1438,7 +1438,7 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	vf_num = pci_dev->max_vfs;
+	vf_num = ad->max_vfs;
 
 	if (l2_tn_filter->pool > vf_num)
 		return -rte_errno;
-- 
2.47.3



* [PATCH v1 08/15] net/ixgbe: do not use flow list to count flows
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (6 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 07/15] net/ixgbe: store max VFs in adapter Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-05-06  9:24   ` Bruce Richardson
  2026-04-30 11:14 ` [PATCH v1 09/15] net/ixgbe: remove redundant flow tracking lists Anatoly Burakov
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Currently, the FDIR code uses the emptiness of its flow list as an
indicator that there are no flows (and that a mask can be installed). That
usage is the only thing preventing removal of the FDIR flow list
altogether, so introduce an explicit flow count instead.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 1 +
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 1 +
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 8 +++++---
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 3 ++-
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 1c4a2e1177..ee1b499b49 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1465,6 +1465,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 		rte_hash_free(fdir_info->hash_handle);
 		return -ENOMEM;
 	}
+	fdir_info->n_flows = 0;
 	fdir_info->mask_added = FALSE;
 
 	return 0;
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 2fb6d55387..6147cd6bdf 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -199,6 +199,7 @@ struct ixgbe_hw_fdir_info {
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
 	bool mask_added; /* If already got mask from consistent filter */
+	uint32_t    n_flows;
 };
 
 struct ixgbe_rte_flow_rss_conf {
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index f51582a4bf..5f7159abf2 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -1362,20 +1362,22 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *fdir_filter;
-	struct ixgbe_fdir_filter *filter_flag;
+	bool had_flows;
 	int ret = 0;
 
+	had_flows = (fdir_info->n_flows != 0);
+
 	/* flush flow director */
 	rte_hash_reset(fdir_info->hash_handle);
 	memset(fdir_info->hash_map, 0,
 	       sizeof(struct ixgbe_fdir_filter *) * IXGBE_MAX_FDIR_FILTER_NUM);
-	filter_flag = TAILQ_FIRST(&fdir_info->fdir_list);
 	while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
 		TAILQ_REMOVE(&fdir_info->fdir_list,
 			     fdir_filter,
 			     entries);
 		rte_free(fdir_filter);
 	}
+	fdir_info->n_flows = 0;
 
 	/* reset internal FDIR state */
 	fdir_info->mask = (struct ixgbe_hw_fdir_mask){0};
@@ -1383,7 +1385,7 @@ ixgbe_clear_all_fdir_filter(struct rte_eth_dev *dev)
 	fdir_info->mask_added = FALSE;
 	fdir_conf->mode = RTE_FDIR_MODE_NONE;
 
-	if (filter_flag != NULL)
+	if (had_flows)
 		ret = ixgbe_fdir_flush(dev);
 
 	return ret;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index e641a9d405..b68934e911 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -3241,6 +3241,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 					&fdir_rule_ptr->base, entries);
 				flow->rule = fdir_rule_ptr;
 				flow->filter_type = RTE_ETH_FILTER_FDIR;
+				fdir_info->n_flows++;
 
 				return flow;
 			}
@@ -3477,7 +3478,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			TAILQ_REMOVE(&flow_lists->fdir_list,
 				&fdir_rule_ptr->base, entries);
 			rte_free(fdir_rule_ptr);
-			if (TAILQ_EMPTY(&flow_lists->fdir_list)) {
+			if (fdir_info->n_flows > 0 && --(fdir_info->n_flows) == 0) {
 				fdir_info->mask_added = false;
 				fdir_info->mask = (struct ixgbe_hw_fdir_mask){0};
 				fdir_info->flex_bytes_offset = 0;
-- 
2.47.3



* [PATCH v1 09/15] net/ixgbe: remove redundant flow tracking lists
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (7 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 08/15] net/ixgbe: do not use flow list to count flows Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 10/15] net/ixgbe: reduce FDIR conf macro usage Anatoly Burakov
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Keep flow rule ownership in `ixgbe_flow_list` and drop duplicate per-type
tracking lists in the flow path.

This removes redundant TAILQ bookkeeping for ntuple, ethertype, syn, FDIR,
l2 tunnel, and RSS rules while preserving behavior.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 15 +---
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 96 ++++++--------------------
 2 files changed, 21 insertions(+), 90 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 6147cd6bdf..f49a179082 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -460,16 +460,6 @@ struct ixgbe_tm_conf {
 struct ixgbe_filter_ele_base;
 TAILQ_HEAD(ixgbe_filter_ele_list, ixgbe_filter_ele_base);
 
-struct ixgbe_flow_lists {
-	struct ixgbe_filter_ele_list ntuple_list;
-	struct ixgbe_filter_ele_list ethertype_list;
-	struct ixgbe_filter_ele_list syn_list;
-	struct ixgbe_filter_ele_list fdir_list;
-	struct ixgbe_filter_ele_list l2_tunnel_list;
-	struct ixgbe_filter_ele_list rss_list;
-	struct ixgbe_filter_ele_list flow_list;
-};
-
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -491,7 +481,7 @@ struct ixgbe_adapter {
 	struct ixgbe_bypass_info    bps;
 #endif /* RTE_LIBRTE_IXGBE_BYPASS */
 	struct ixgbe_filter_info    filter;
-	struct ixgbe_flow_lists     flow_lists;
+	struct ixgbe_filter_ele_list flow_list;
 	struct ixgbe_l2_tn_info     l2_tn;
 	struct ixgbe_bw_conf        bw_conf;
 	struct ixgbe_ipsec          ipsec;
@@ -577,9 +567,6 @@ uint16_t ixgbe_vf_representor_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts
 #define IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->filter)
 
-#define IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(adapter) \
-	(&((struct ixgbe_adapter *)adapter)->flow_lists)
-
 #define IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter) \
 	(&((struct ixgbe_adapter *)adapter)->l2_tn)
 
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index b68934e911..128efd3f86 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -3002,49 +3002,25 @@ ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
 void
 ixgbe_filterlist_init(struct rte_eth_dev *dev)
 {
-	struct ixgbe_flow_lists *flow_lists =
-		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
-	TAILQ_INIT(&flow_lists->ntuple_list);
-	TAILQ_INIT(&flow_lists->ethertype_list);
-	TAILQ_INIT(&flow_lists->syn_list);
-	TAILQ_INIT(&flow_lists->fdir_list);
-	TAILQ_INIT(&flow_lists->l2_tunnel_list);
-	TAILQ_INIT(&flow_lists->rss_list);
-	TAILQ_INIT(&flow_lists->flow_list);
-}
-
-static void
-ixgbe_filter_flush(struct ixgbe_filter_ele_list *list)
-{
-	struct ixgbe_filter_ele_base *ele, *tmp;
-
-	RTE_TAILQ_FOREACH_SAFE(ele, list, entries, tmp) {
-		TAILQ_REMOVE(list, ele, entries);
-		rte_free(ele);
-	}
+	TAILQ_INIT(&adapter->flow_list);
 }
 
 void
 ixgbe_filterlist_flush(struct rte_eth_dev *dev)
 {
-	struct ixgbe_flow_lists *flow_lists =
-		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_ele_base *ele, *tmp;
 
-	ixgbe_filter_flush(&flow_lists->ntuple_list);
-	ixgbe_filter_flush(&flow_lists->ethertype_list);
-	ixgbe_filter_flush(&flow_lists->syn_list);
-	ixgbe_filter_flush(&flow_lists->l2_tunnel_list);
-	ixgbe_filter_flush(&flow_lists->fdir_list);
-	ixgbe_filter_flush(&flow_lists->rss_list);
-
-	RTE_TAILQ_FOREACH_SAFE(ele, &flow_lists->flow_list, entries, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(ele, &adapter->flow_list, entries, tmp) {
 		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
 			(struct ixgbe_flow_mem *)ele;
+		struct rte_flow *flow = ixgbe_flow_mem_ptr->flow;
 
-		TAILQ_REMOVE(&flow_lists->flow_list, ele, entries);
-		rte_free(ixgbe_flow_mem_ptr->flow);
+		TAILQ_REMOVE(&adapter->flow_list, ele, entries);
+		rte_free(flow->rule);
+		rte_free(flow);
 		rte_free(ele);
 	}
 }
@@ -3079,8 +3055,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
-	struct ixgbe_flow_lists *flow_lists =
-		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint8_t first_mask = FALSE;
 
 	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
@@ -3096,7 +3071,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		return NULL;
 	}
 	ixgbe_flow_mem_ptr->flow = flow;
-	TAILQ_INSERT_TAIL(&flow_lists->flow_list,
+	TAILQ_INSERT_TAIL(&adapter->flow_list,
 				&ixgbe_flow_mem_ptr->base, entries);
 
 	/**
@@ -3124,8 +3099,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&ntuple_filter_ptr->filter_info,
 				&ntuple_filter,
 				sizeof(struct rte_eth_ntuple_filter));
-			TAILQ_INSERT_TAIL(&flow_lists->ntuple_list,
-				&ntuple_filter_ptr->base, entries);
 			flow->rule = ntuple_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_NTUPLE;
 			return flow;
@@ -3150,8 +3123,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&ethertype_filter_ptr->filter_info,
 				&ethertype_filter,
 				sizeof(struct rte_eth_ethertype_filter));
-			TAILQ_INSERT_TAIL(&flow_lists->ethertype_list,
-				&ethertype_filter_ptr->base, entries);
 			flow->rule = ethertype_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
 			return flow;
@@ -3174,9 +3145,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&syn_filter_ptr->filter_info,
 				&syn_filter,
 				sizeof(struct rte_eth_syn_filter));
-			TAILQ_INSERT_TAIL(&flow_lists->syn_list,
-				&syn_filter_ptr->base,
-				entries);
 			flow->rule = syn_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_SYN;
 			return flow;
@@ -3237,8 +3205,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 				rte_memcpy(&fdir_rule_ptr->filter_info,
 					&fdir_rule,
 					sizeof(struct ixgbe_fdir_rule));
-				TAILQ_INSERT_TAIL(&flow_lists->fdir_list,
-					&fdir_rule_ptr->base, entries);
 				flow->rule = fdir_rule_ptr;
 				flow->filter_type = RTE_ETH_FILTER_FDIR;
 				fdir_info->n_flows++;
@@ -3275,8 +3241,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			rte_memcpy(&l2_tn_filter_ptr->filter_info,
 				&l2_tn_filter,
 				sizeof(struct ixgbe_l2_tunnel_conf));
-			TAILQ_INSERT_TAIL(&flow_lists->l2_tunnel_list,
-				&l2_tn_filter_ptr->base, entries);
 			flow->rule = l2_tn_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
 			return flow;
@@ -3297,8 +3261,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			}
 			ixgbe_rss_conf_init(&rss_filter_ptr->filter_info,
 					    &rss_conf.conf);
-			TAILQ_INSERT_TAIL(&flow_lists->rss_list,
-				&rss_filter_ptr->base, entries);
 			flow->rule = rss_filter_ptr;
 			flow->filter_type = RTE_ETH_FILTER_HASH;
 			return flow;
@@ -3306,7 +3268,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	}
 
 out:
-	TAILQ_REMOVE(&flow_lists->flow_list,
+	TAILQ_REMOVE(&adapter->flow_list,
 		&ixgbe_flow_mem_ptr->base, entries);
 	rte_flow_error_set(error, -ret,
 			   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
@@ -3400,14 +3362,13 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_filter_ele_base *flow_mem_base;
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
-	struct ixgbe_flow_lists *flow_lists =
-		IXGBE_DEV_PRIVATE_TO_FLOW_LISTS(dev->data->dev_private);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 
 	/* Validate ownership before touching HW/SW state. */
-	TAILQ_FOREACH(flow_mem_base, &flow_lists->flow_list, entries) {
+	TAILQ_FOREACH(flow_mem_base, &adapter->flow_list, entries) {
 		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
 			(struct ixgbe_flow_mem *)flow_mem_base;
 
@@ -3434,11 +3395,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			&ntuple_filter_ptr->filter_info,
 			sizeof(struct rte_eth_ntuple_filter));
 		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
-		if (!ret) {
-			TAILQ_REMOVE(&flow_lists->ntuple_list,
-				&ntuple_filter_ptr->base, entries);
+		if (!ret)
 			rte_free(ntuple_filter_ptr);
-		}
 		break;
 	case RTE_ETH_FILTER_ETHERTYPE:
 		ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
@@ -3448,11 +3406,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			sizeof(struct rte_eth_ethertype_filter));
 		ret = ixgbe_add_del_ethertype_filter(dev,
 				&ethertype_filter, FALSE);
-		if (!ret) {
-			TAILQ_REMOVE(&flow_lists->ethertype_list,
-				&ethertype_filter_ptr->base, entries);
+		if (!ret)
 			rte_free(ethertype_filter_ptr);
-		}
 		break;
 	case RTE_ETH_FILTER_SYN:
 		syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
@@ -3461,11 +3416,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			&syn_filter_ptr->filter_info,
 			sizeof(struct rte_eth_syn_filter));
 		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
-		if (!ret) {
-			TAILQ_REMOVE(&flow_lists->syn_list,
-				&syn_filter_ptr->base, entries);
+		if (!ret)
 			rte_free(syn_filter_ptr);
-		}
 		break;
 	case RTE_ETH_FILTER_FDIR:
 		fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
@@ -3475,8 +3427,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
 		if (!ret) {
 			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
-			TAILQ_REMOVE(&flow_lists->fdir_list,
-				&fdir_rule_ptr->base, entries);
 			rte_free(fdir_rule_ptr);
 			if (fdir_info->n_flows > 0 && --(fdir_info->n_flows) == 0) {
 				fdir_info->mask_added = false;
@@ -3492,22 +3442,16 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
 			sizeof(struct ixgbe_l2_tunnel_conf));
 		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
-		if (!ret) {
-			TAILQ_REMOVE(&flow_lists->l2_tunnel_list,
-				&l2_tn_filter_ptr->base, entries);
+		if (!ret)
 			rte_free(l2_tn_filter_ptr);
-		}
 		break;
 	case RTE_ETH_FILTER_HASH:
 		rss_filter_ptr = (struct ixgbe_rss_conf_ele *)
 				pmd_flow->rule;
 		ret = ixgbe_config_rss_filter(dev,
 					&rss_filter_ptr->filter_info, FALSE);
-		if (!ret) {
-			TAILQ_REMOVE(&flow_lists->rss_list,
-				&rss_filter_ptr->base, entries);
+		if (!ret)
 			rte_free(rss_filter_ptr);
-		}
 		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
@@ -3524,7 +3468,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	}
 
 free:
-	TAILQ_REMOVE(&flow_lists->flow_list, flow_mem_base, entries);
+	TAILQ_REMOVE(&adapter->flow_list, flow_mem_base, entries);
 	rte_free(flow_mem_base);
 	rte_free(flow);
 
@@ -3658,7 +3602,7 @@ ixgbe_flow_dev_dump(struct rte_eth_dev *dev,
 	struct ixgbe_filter_ele_base *flow_mem_base;
 	bool found = false;
 
-	TAILQ_FOREACH(flow_mem_base, &ad->flow_lists.flow_list, entries) {
+	TAILQ_FOREACH(flow_mem_base, &ad->flow_list, entries) {
 		struct ixgbe_flow_mem *ixgbe_flow_mem_ptr =
 			(struct ixgbe_flow_mem *)flow_mem_base;
 		struct rte_flow *p_flow = ixgbe_flow_mem_ptr->flow;
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v1 10/15] net/ixgbe: reduce FDIR conf macro usage
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (8 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 09/15] net/ixgbe: remove redundant flow tracking lists Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 11/15] net/ixgbe: use adapter in flow-related calls Anatoly Burakov
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Currently, there are quite a few places where the IXGBE_DEV_FDIR_CONF macro
is invoked repeatedly within the same function. Change these call sites to
fetch the FDIR conf pointer once, and use that pointer instead.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c |  3 ++-
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 34 ++++++++++++++++----------
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index ee1b499b49..dc3aa49ec4 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -2614,6 +2614,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw =
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -2718,7 +2719,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	/* Configure DCB hw */
 	ixgbe_configure_dcb(dev);
 
-	if (IXGBE_DEV_FDIR_CONF(dev)->mode != RTE_FDIR_MODE_NONE) {
+	if (fdir_conf->mode != RTE_FDIR_MODE_NONE) {
 		err = ixgbe_fdir_configure(dev);
 		if (err)
 			goto error;
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 5f7159abf2..51557cf68d 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -251,6 +251,7 @@ static int
 fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	/*
@@ -313,7 +314,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
 	*reg = ~(info->mask.dst_ipv4_mask);
 
-	if (IXGBE_DEV_FDIR_CONF(dev)->mode == RTE_FDIR_MODE_SIGNATURE) {
+	if (fdir_conf->mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
@@ -334,6 +335,7 @@ static int
 fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	/* mask VM pool and DIPv6 since there are currently not supported
@@ -342,7 +344,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
 			 IXGBE_FDIRM_FLEX;
 	uint32_t fdiripv6m;
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode mode = fdir_conf->mode;
 	uint16_t mac_mask;
 
 	PMD_INIT_FUNC_TRACE();
@@ -425,7 +427,8 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 int
 ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 {
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
@@ -563,10 +566,11 @@ int
 ixgbe_fdir_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	int err;
 	uint32_t fdirctrl, pbsize;
 	int i;
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -587,7 +591,7 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 	    mode != RTE_FDIR_MODE_PERFECT)
 		return -ENOSYS;
 
-	err = configure_fdir_flags(IXGBE_DEV_FDIR_CONF(dev), &fdirctrl);
+	err = configure_fdir_flags(fdir_conf, &fdirctrl);
 	if (err)
 		return err;
 
@@ -614,7 +618,7 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(ERR, " Error on setting FD mask");
 		return err;
 	}
-	err = ixgbe_set_fdir_flex_conf(dev, &IXGBE_DEV_FDIR_CONF(dev)->flex_conf,
+	err = ixgbe_set_fdir_flex_conf(dev, &fdir_conf->flex_conf,
 				       &fdirctrl);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD flexible arguments.");
@@ -1043,6 +1047,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
 	uint8_t queue;
@@ -1050,7 +1055,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 	int err;
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
@@ -1095,12 +1100,12 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			return -ENOTSUP;
 		}
 		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
-							  IXGBE_DEV_FDIR_CONF(dev)->pballoc);
+				fdir_conf->pballoc);
 		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
 		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
-						      IXGBE_DEV_FDIR_CONF(dev)->pballoc);
+				fdir_conf->pballoc);
 
 	if (del) {
 		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
@@ -1118,7 +1123,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
 	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
-			queue = IXGBE_DEV_FDIR_CONF(dev)->drop_queue;
+			queue = fdir_conf->drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
 		} else {
 			PMD_DRV_LOG(ERR, "Drop option is not supported in"
@@ -1209,6 +1214,7 @@ void
 ixgbe_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir_info)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	uint32_t fdirctrl, max_num;
@@ -1218,7 +1224,7 @@ ixgbe_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir_info
 	offset = ((fdirctrl & IXGBE_FDIRCTRL_FLEX_MASK) >>
 			IXGBE_FDIRCTRL_FLEX_SHIFT) * sizeof(uint16_t);
 
-	fdir_info->mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	fdir_info->mode = fdir_conf->mode;
 	max_num = (1 << (FDIRENTRIES_NUM_SHIFT +
 			(fdirctrl & FDIRCTRL_PBALLOC_MASK)));
 	if (fdir_info->mode >= RTE_FDIR_MODE_PERFECT &&
@@ -1268,10 +1274,11 @@ void
 ixgbe_fdir_stats_get(struct rte_eth_dev *dev, struct rte_eth_fdir_stats *fdir_stats)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	uint32_t reg, max_num;
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 
 	/* Get the information from registers */
 	reg = IXGBE_READ_REG(hw, IXGBE_FDIRFREE);
@@ -1324,11 +1331,12 @@ void
 ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *node;
 	bool is_perfect = FALSE;
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 
 	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-- 
2.47.3



* [PATCH v1 11/15] net/ixgbe: use adapter in flow-related calls
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (9 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 10/15] net/ixgbe: reduce FDIR conf macro usage Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 12/15] net/ixgbe: support protocol-only TCP and UDP rules Anatoly Burakov
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Currently, a lot of `rte_flow`-related code paths depend on using the
`dev` pointer. This has been acceptable up until now, because all the
infrastructure surrounding `rte_flow` has been ad-hoc and did not have
any persistent driver identification mechanism that worked across multiple
drivers, so every API call was tied to an immediate `rte_eth_dev` API
invocation.

However, with coming shared infrastructure, we can no longer rely on things
that are process-local (such as `dev` pointer), and because most calls can
be implemented using `adapter` anyway, we'll just switch the flow-related
internal calls to use `adapter` instead of `dev`.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 94 +++++++++++++++-----------
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 23 ++++---
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 65 ++++++++++--------
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 45 ++++++------
 drivers/net/intel/ixgbe/ixgbe_rxtx.c   | 10 +--
 5 files changed, 131 insertions(+), 106 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index dc3aa49ec4..80d70fe083 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -302,9 +302,9 @@ static int ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static int ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					     struct rte_ether_addr *mac_addr);
-static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
+static int ixgbe_add_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter);
-static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
+static void ixgbe_remove_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter);
 static int ixgbe_dev_flow_ops_get(struct rte_eth_dev *dev,
 				  const struct rte_flow_ops **ops);
@@ -2612,8 +2612,9 @@ ixgbe_flow_ctrl_enable(struct rte_eth_dev *dev, struct ixgbe_hw *hw)
 static int
 ixgbe_dev_start(struct rte_eth_dev *dev)
 {
-	struct ixgbe_hw *hw =
-		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
@@ -2720,7 +2721,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	ixgbe_configure_dcb(dev);
 
 	if (fdir_conf->mode != RTE_FDIR_MODE_NONE) {
-		err = ixgbe_fdir_configure(dev);
+		err = ixgbe_fdir_configure(adapter);
 		if (err)
 			goto error;
 	}
@@ -6446,13 +6447,13 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 int
-ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+ixgbe_syn_filter_set(struct ixgbe_adapter *adapter,
 			struct rte_eth_syn_filter *filter,
 			bool add)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint32_t syn_info;
 	uint32_t synqf;
 
@@ -6500,10 +6501,10 @@ convert_protocol_type(uint8_t protocol_value)
 
 /* inject a 5-tuple filter to HW */
 static inline void
-ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_inject_5tuple_filter(struct ixgbe_adapter *adapter,
 			   struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	int i;
 	uint32_t ftqf, sdpqf;
 	uint32_t l34timir = 0;
@@ -6558,11 +6559,11 @@ ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
  *    - On failure, a negative value.
  */
 static int
-ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_add_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter)
 {
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	int i, idx, shift;
 
 	/*
@@ -6586,7 +6587,7 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	ixgbe_inject_5tuple_filter(dev, filter);
+	ixgbe_inject_5tuple_filter(adapter, filter);
 
 	return 0;
 }
@@ -6599,12 +6600,12 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
  * filter: the pointer of the filter will be removed.
  */
 static void
-ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_remove_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint16_t index = filter->index;
 
 	filter_info->fivetuple_mask[index / (sizeof(uint32_t) * NBBY)] &=
@@ -6772,12 +6773,12 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On failure, a negative value.
  */
 int
-ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+ixgbe_add_del_ntuple_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
 {
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	struct ixgbe_5tuple_filter_info filter_5tuple;
 	struct ixgbe_5tuple_filter *filter;
 	int ret;
@@ -6812,25 +6813,25 @@ ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 				 &filter_5tuple,
 				 sizeof(struct ixgbe_5tuple_filter_info));
 		filter->queue = ntuple_filter->queue;
-		ret = ixgbe_add_5tuple_filter(dev, filter);
+		ret = ixgbe_add_5tuple_filter(adapter, filter);
 		if (ret < 0) {
 			rte_free(filter);
 			return ret;
 		}
 	} else
-		ixgbe_remove_5tuple_filter(dev, filter);
+		ixgbe_remove_5tuple_filter(adapter, filter);
 
 	return 0;
 }
 
 int
-ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+ixgbe_add_del_ethertype_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
@@ -7700,11 +7701,11 @@ ixgbe_e_tag_enable(struct ixgbe_hw *hw)
 }
 
 static int
-ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
+ixgbe_e_tag_filter_del(struct ixgbe_adapter *adapter,
 		       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	uint32_t i, rar_entries;
 	uint32_t rar_low, rar_high;
 
@@ -7737,11 +7738,11 @@ ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
 }
 
 static int
-ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
+ixgbe_e_tag_filter_add(struct ixgbe_adapter *adapter,
 		       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	uint32_t i, rar_entries;
 	uint32_t rar_low, rar_high;
 
@@ -7753,7 +7754,7 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	}
 
 	/* One entry for one tunnel. Try to remove potential existing entry. */
-	ixgbe_e_tag_filter_del(dev, l2_tunnel);
+	ixgbe_e_tag_filter_del(adapter, l2_tunnel);
 
 	rar_entries = ixgbe_get_num_rx_addrs(hw);
 
@@ -7842,13 +7843,13 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 
 /* Add l2 tunnel filter */
 int
-ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_add(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
@@ -7883,7 +7884,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
-		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
+		ret = ixgbe_e_tag_filter_add(adapter, l2_tunnel);
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Invalid tunnel type");
@@ -7899,12 +7900,12 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 
 /* Delete l2 tunnel filter */
 int
-ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_del(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_key key;
 
 	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
@@ -7915,7 +7916,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
-		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
+		ret = ixgbe_e_tag_filter_del(adapter, l2_tunnel);
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Invalid tunnel type");
@@ -8312,12 +8313,14 @@ int ixgbe_enable_sec_tx_path_generic(struct ixgbe_hw *hw)
 static inline void
 ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	struct ixgbe_5tuple_filter *node;
 
 	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
-		ixgbe_inject_5tuple_filter(dev, node);
+		ixgbe_inject_5tuple_filter(adapter, node);
 	}
 }
 
@@ -8362,8 +8365,10 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 static inline void
 ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_filter *node;
 	struct ixgbe_l2_tunnel_conf l2_tn_conf;
 
@@ -8371,7 +8376,8 @@ ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
 		l2_tn_conf.tunnel_id      = node->key.tn_id;
 		l2_tn_conf.pool           = node->pool;
-		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+		(void)ixgbe_dev_l2_tunnel_filter_add(adapter,
+						     &l2_tn_conf, TRUE);
 	}
 }
 
@@ -8379,11 +8385,13 @@ ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 static inline void
 ixgbe_rss_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 
 	if (filter_info->rss_info.conf.queue_num)
-		ixgbe_config_rss_filter(dev,
+		ixgbe_config_rss_filter(adapter,
 			&filter_info->rss_info, TRUE);
 }
 
@@ -8420,12 +8428,14 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 void
 ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	struct ixgbe_5tuple_filter *p_5tuple;
 
 	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
-		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+		ixgbe_remove_5tuple_filter(adapter, p_5tuple);
 }
 
 /* remove all the ether type filters */
@@ -8469,6 +8479,8 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 int
 ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+			IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	struct ixgbe_l2_tn_filter *l2_tn_filter;
@@ -8479,7 +8491,7 @@ ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
 		l2_tn_conf.tunnel_id      = l2_tn_filter->key.tn_id;
 		l2_tn_conf.pool           = l2_tn_filter->pool;
-		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		ret = ixgbe_dev_l2_tunnel_filter_del(adapter, &l2_tn_conf);
 		if (ret < 0)
 			return ret;
 	}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index f49a179082..65e71e7689 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -525,6 +525,9 @@ uint16_t ixgbe_vf_representor_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts
 #define IXGBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct ixgbe_adapter *)adapter)
 
+#define IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter) \
+	(&(adapter)->fdir_conf)
+
 #define IXGBE_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct ixgbe_adapter *)adapter)->hw)
 
@@ -669,13 +672,13 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
-int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+int ixgbe_add_del_ntuple_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ntuple_filter *filter,
 			bool add);
-int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+int ixgbe_add_del_ethertype_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ethertype_filter *filter,
 			bool add);
-int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+int ixgbe_syn_filter_set(struct ixgbe_adapter *adapter,
 			struct rte_eth_syn_filter *filter,
 			bool add);
 
@@ -691,22 +694,22 @@ struct ixgbe_l2_tunnel_conf {
 };
 
 int
-ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_add(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
 int
-ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_del(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel);
 void ixgbe_filterlist_init(struct rte_eth_dev *dev);
 void ixgbe_filterlist_flush(struct rte_eth_dev *dev);
 /*
  * Flow director function prototypes
  */
-int ixgbe_fdir_configure(struct rte_eth_dev *dev);
-int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
-int ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
+int ixgbe_fdir_configure(struct ixgbe_adapter *adapter);
+int ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter);
+int ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
 				    uint16_t offset);
-int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+int ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter,
 			      struct ixgbe_fdir_rule *rule,
 			      bool del, bool update);
 void ixgbe_fdir_info_get(struct rte_eth_dev *dev,
@@ -766,7 +769,7 @@ int ixgbe_rss_conf_init(struct ixgbe_rte_flow_rss_conf *out,
 			const struct rte_flow_action_rss *in);
 int ixgbe_action_rss_same(const struct rte_flow_action_rss *comp,
 			  const struct rte_flow_action_rss *with);
-int ixgbe_config_rss_filter(struct rte_eth_dev *dev,
+int ixgbe_config_rss_filter(struct ixgbe_adapter *adapter,
 		struct ixgbe_rte_flow_rss_conf *conf, bool add);
 
 void ixgbe_dev_macsec_register_enable(struct rte_eth_dev *dev,
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 51557cf68d..e7a380da31 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -79,9 +79,9 @@
 #define IXGBE_FDIRIP6M_INNER_MAC_SHIFT 4
 
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
-static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
+static int fdir_set_input_mask_82599(struct ixgbe_adapter *adapter);
+static int fdir_set_input_mask_x550(struct ixgbe_adapter *adapter);
+static int ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
@@ -248,12 +248,13 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev)
+fdir_set_input_mask_82599(struct ixgbe_adapter *adapter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_fdir_conf *fdir_conf =
+		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	struct ixgbe_hw_fdir_info *info =
-			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	/*
 	 * mask VM pool and DIPv6 since there are currently not supported
 	 * mask FLEX byte, it will be set in flex_conf
@@ -332,12 +333,13 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev)
+fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_fdir_conf *fdir_conf =
+		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	struct ixgbe_hw_fdir_info *info =
-			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	/* mask VM pool and DIPv6 since there are currently not supported
 	 * mask FLEX byte, it will be set in flex_conf
 	 */
@@ -425,29 +427,30 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 }
 
 int
-ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
+ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter)
 {
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct rte_eth_fdir_conf *fdir_conf =
+		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev);
+		return fdir_set_input_mask_82599(adapter);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev);
+		return fdir_set_input_mask_x550(adapter);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
 int
-ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
+ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
 				uint16_t offset)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_hw_fdir_info *fdir_info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	uint32_t fdirctrl;
 	int i;
 
@@ -500,12 +503,12 @@ ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
  * arguments are valid
  */
 static int
-ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
+ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_hw_fdir_info *info =
-			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	const struct rte_eth_flex_payload_cfg *flex_cfg;
 	const struct rte_eth_fdir_flex_mask *flex_mask;
 	uint32_t fdirm;
@@ -563,10 +566,11 @@ ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 }
 
 int
-ixgbe_fdir_configure(struct rte_eth_dev *dev)
+ixgbe_fdir_configure(struct ixgbe_adapter *adapter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_fdir_conf *fdir_conf =
+		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	int err;
 	uint32_t fdirctrl, pbsize;
 	int i;
@@ -613,12 +617,12 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 	for (i = 1; i < 8; i++)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
 
-	err = ixgbe_fdir_set_input_mask(dev);
+	err = ixgbe_fdir_set_input_mask(adapter);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD mask");
 		return err;
 	}
-	err = ixgbe_set_fdir_flex_conf(dev, &fdir_conf->flex_conf,
+	err = ixgbe_set_fdir_flex_conf(adapter, &fdir_conf->flex_conf,
 				       &fdirctrl);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD flexible arguments.");
@@ -1041,20 +1045,21 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 }
 
 int
-ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter,
 			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
+	struct rte_eth_fdir_conf *fdir_conf =
+		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
 	struct ixgbe_hw_fdir_info *info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 128efd3f86..a6fcfe7574 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -2832,6 +2832,7 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE;
 
@@ -2859,7 +2860,7 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 
 	if (fdir_conf->mode == RTE_FDIR_MODE_NONE) {
 		fdir_conf->mode = rule->mode;
-		ret = ixgbe_fdir_configure(dev);
+		ret = ixgbe_fdir_configure(adapter);
 		if (ret) {
 			fdir_conf->mode = RTE_FDIR_MODE_NONE;
 			return ret;
@@ -2992,11 +2993,13 @@ ixgbe_parse_rss_filter(struct rte_eth_dev *dev,
 static void
 ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 
 	if (filter_info->rss_info.conf.queue_num)
-		ixgbe_config_rss_filter(dev, &filter_info->rss_info, FALSE);
+		ixgbe_config_rss_filter(adapter, &filter_info->rss_info, FALSE);
 }
 
 void
@@ -3039,13 +3042,15 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		  struct rte_flow_error *error)
 {
 	int ret;
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_ntuple_filter ntuple_filter;
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
 	struct ixgbe_fdir_rule fdir_rule;
 	struct ixgbe_l2_tunnel_conf l2_tn_filter;
 	struct ixgbe_hw_fdir_info *fdir_info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	struct ixgbe_rte_flow_rss_conf rss_conf;
 	struct rte_flow *flow = NULL;
 	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
@@ -3055,7 +3060,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
-	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint8_t first_mask = FALSE;
 
 	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
@@ -3088,7 +3092,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 			actions, &ntuple_filter, error);
 
 	if (!ret) {
-		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
+		ret = ixgbe_add_del_ntuple_filter(adapter, &ntuple_filter, TRUE);
 		if (!ret) {
 			ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
 				sizeof(struct ixgbe_ntuple_filter_ele), 0);
@@ -3110,7 +3114,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	ret = ixgbe_parse_ethertype_filter(dev, attr, pattern,
 				actions, &ethertype_filter, error);
 	if (!ret) {
-		ret = ixgbe_add_del_ethertype_filter(dev,
+		ret = ixgbe_add_del_ethertype_filter(adapter,
 				&ethertype_filter, TRUE);
 		if (!ret) {
 			ethertype_filter_ptr = rte_zmalloc(
@@ -3134,7 +3138,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	ret = ixgbe_parse_syn_filter(dev, attr, pattern,
 				actions, &syn_filter, error);
 	if (!ret) {
-		ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
+		ret = ixgbe_syn_filter_set(adapter, &syn_filter, TRUE);
 		if (!ret) {
 			syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
 				sizeof(struct ixgbe_eth_syn_filter_ele), 0);
@@ -3163,12 +3167,12 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 				*&fdir_info->mask = *&fdir_rule.mask;
 
 				if (fdir_rule.mask.flex_bytes_mask) {
-					ret = ixgbe_fdir_set_flexbytes_offset(dev,
+					ret = ixgbe_fdir_set_flexbytes_offset(adapter,
 						fdir_rule.flex_bytes_offset);
 					if (ret)
 						goto out;
 				}
-				ret = ixgbe_fdir_set_input_mask(dev);
+				ret = ixgbe_fdir_set_input_mask(adapter);
 				if (ret)
 					goto out;
 
@@ -3193,7 +3197,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		}
 
 		if (fdir_rule.b_spec) {
-			ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
+			ret = ixgbe_fdir_filter_program(adapter, &fdir_rule,
 					FALSE, FALSE);
 			if (!ret) {
 				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
@@ -3230,7 +3234,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	ret = ixgbe_parse_l2_tn_filter(dev, attr, pattern,
 					actions, &l2_tn_filter, error);
 	if (!ret) {
-		ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
+		ret = ixgbe_dev_l2_tunnel_filter_add(adapter, &l2_tn_filter, FALSE);
 		if (!ret) {
 			l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
 				sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
@@ -3251,7 +3255,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	ret = ixgbe_parse_rss_filter(dev, attr,
 					actions, &rss_conf, error);
 	if (!ret) {
-		ret = ixgbe_config_rss_filter(dev, &rss_conf, TRUE);
+		ret = ixgbe_config_rss_filter(adapter, &rss_conf, TRUE);
 		if (!ret) {
 			rss_filter_ptr = rte_zmalloc("ixgbe_rss_filter",
 				sizeof(struct ixgbe_rss_conf_ele), 0);
@@ -3349,6 +3353,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
 	int ret;
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_flow *pmd_flow = flow;
 	enum rte_filter_type filter_type = pmd_flow->filter_type;
 	struct rte_eth_ntuple_filter ntuple_filter;
@@ -3362,9 +3368,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_filter_ele_base *flow_mem_base;
-	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *fdir_info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 
 	/* Validate ownership before touching HW/SW state. */
@@ -3394,7 +3399,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&ntuple_filter,
 			&ntuple_filter_ptr->filter_info,
 			sizeof(struct rte_eth_ntuple_filter));
-		ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
+		ret = ixgbe_add_del_ntuple_filter(adapter, &ntuple_filter, FALSE);
 		if (!ret)
 			rte_free(ntuple_filter_ptr);
 		break;
@@ -3404,7 +3409,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&ethertype_filter,
 			&ethertype_filter_ptr->filter_info,
 			sizeof(struct rte_eth_ethertype_filter));
-		ret = ixgbe_add_del_ethertype_filter(dev,
+		ret = ixgbe_add_del_ethertype_filter(adapter,
 				&ethertype_filter, FALSE);
 		if (!ret)
 			rte_free(ethertype_filter_ptr);
@@ -3415,7 +3420,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&syn_filter,
 			&syn_filter_ptr->filter_info,
 			sizeof(struct rte_eth_syn_filter));
-		ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
+		ret = ixgbe_syn_filter_set(adapter, &syn_filter, FALSE);
 		if (!ret)
 			rte_free(syn_filter_ptr);
 		break;
@@ -3424,7 +3429,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&fdir_rule,
 			&fdir_rule_ptr->filter_info,
 			sizeof(struct ixgbe_fdir_rule));
-		ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
+		ret = ixgbe_fdir_filter_program(adapter, &fdir_rule, TRUE, FALSE);
 		if (!ret) {
 			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 			rte_free(fdir_rule_ptr);
@@ -3441,14 +3446,14 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 				pmd_flow->rule;
 		rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
 			sizeof(struct ixgbe_l2_tunnel_conf));
-		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
+		ret = ixgbe_dev_l2_tunnel_filter_del(adapter, &l2_tn_filter);
 		if (!ret)
 			rte_free(l2_tn_filter_ptr);
 		break;
 	case RTE_ETH_FILTER_HASH:
 		rss_filter_ptr = (struct ixgbe_rss_conf_ele *)
 				pmd_flow->rule;
-		ret = ixgbe_config_rss_filter(dev,
+		ret = ixgbe_config_rss_filter(adapter,
 					&rss_filter_ptr->filter_info, FALSE);
 		if (!ret)
 			rte_free(rss_filter_ptr);
diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
index 3be0f0492a..60222693fe 100644
--- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c
@@ -6132,7 +6132,7 @@ ixgbe_action_rss_same(const struct rte_flow_action_rss *comp,
 }
 
 int
-ixgbe_config_rss_filter(struct rte_eth_dev *dev,
+ixgbe_config_rss_filter(struct ixgbe_adapter *adapter,
 		struct ixgbe_rte_flow_rss_conf *conf, bool add)
 {
 	struct ixgbe_hw *hw;
@@ -6148,17 +6148,17 @@ ixgbe_config_rss_filter(struct rte_eth_dev *dev,
 		.rss_hf = conf->conf.types,
 	};
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 
 	PMD_INIT_FUNC_TRACE();
-	hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 
 	sp_reta_size = ixgbe_reta_size_get(hw->mac.type);
 
 	if (!add) {
 		if (ixgbe_action_rss_same(&filter_info->rss_info.conf,
 					  &conf->conf)) {
-			ixgbe_rss_disable(dev);
+			ixgbe_mrqc_rss_remove(hw);
 			memset(&filter_info->rss_info, 0,
 				sizeof(struct ixgbe_rte_flow_rss_conf));
 			return 0;
@@ -6188,7 +6188,7 @@ ixgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * the RSS hash of input packets.
 	 */
 	if ((rss_conf.rss_hf & IXGBE_RSS_OFFLOAD_ALL) == 0) {
-		ixgbe_rss_disable(dev);
+		ixgbe_mrqc_rss_remove(hw);
 		return 0;
 	}
 	if (rss_conf.rss_key == NULL)
-- 
2.47.3



* [PATCH v1 12/15] net/ixgbe: support protocol-only TCP and UDP rules
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (10 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 11/15] net/ixgbe: use adapter in flow-related calls Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 13/15] net/ixgbe: write drop queue at init Anatoly Burakov
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

Currently, ixgbe `rte_flow` parsing requires a mask for TCP and UDP
items. This means TCP and UDP FDIR rules can only be programmed as
port-based matches, while protocol-only matches are rejected even though
they are supported in hardware.

Allow TCP and UDP items without a mask so `rte_flow` can express broad L4
matches that care only about the protocol. This makes TCP and UDP
handling consistent with SCTP.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_flow.c | 130 ++++++++++++++-------------
 1 file changed, 67 insertions(+), 63 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index a6fcfe7574..eae81462f6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -2055,42 +2055,44 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 				item, "Not supported last point for range");
 			return -rte_errno;
 		}
-		/**
-		 * Only care about src & dst ports,
-		 * others should be masked.
-		 */
-		if (!item->mask) {
-			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->b_mask = TRUE;
 		tcp_mask = item->mask;
-		if (tcp_mask->hdr.sent_seq ||
-		    tcp_mask->hdr.recv_ack ||
-		    tcp_mask->hdr.data_off ||
-		    tcp_mask->hdr.tcp_flags ||
-		    tcp_mask->hdr.rx_win ||
-		    tcp_mask->hdr.cksum ||
-		    tcp_mask->hdr.tcp_urp) {
-			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
-		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+		if (tcp_mask != NULL) {
+			/**
+			 * Only care about src & dst ports,
+			 * others should be masked.
+			 */
+			rule->b_mask = TRUE;
+			if (tcp_mask->hdr.sent_seq ||
+			    tcp_mask->hdr.recv_ack ||
+			    tcp_mask->hdr.data_off ||
+			    tcp_mask->hdr.tcp_flags ||
+			    tcp_mask->hdr.rx_win ||
+			    tcp_mask->hdr.cksum ||
+			    tcp_mask->hdr.tcp_urp) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+			rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
 
-		if (item->spec) {
-			rule->b_spec = TRUE;
-			tcp_spec = item->spec;
-			rule->ixgbe_fdir.formatted.src_port =
-				tcp_spec->hdr.src_port;
-			rule->ixgbe_fdir.formatted.dst_port =
-				tcp_spec->hdr.dst_port;
+			if (item->spec) {
+				rule->b_spec = TRUE;
+				tcp_spec = item->spec;
+				rule->ixgbe_fdir.formatted.src_port =
+					tcp_spec->hdr.src_port;
+				rule->ixgbe_fdir.formatted.dst_port =
+					tcp_spec->hdr.dst_port;
+			}
+		} else if (item->spec != NULL) {
+			/* No port mask means protocol-only match; spec is invalid. */
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
 		}
 
 		item = next_no_fuzzy_pattern(pattern, item);
@@ -2120,37 +2122,39 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
 				item, "Not supported last point for range");
 			return -rte_errno;
 		}
-		/**
-		 * Only care about src & dst ports,
-		 * others should be masked.
-		 */
-		if (!item->mask) {
-			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->b_mask = TRUE;
 		udp_mask = item->mask;
-		if (udp_mask->hdr.dgram_len ||
-		    udp_mask->hdr.dgram_cksum) {
-			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->mask.src_port_mask = udp_mask->hdr.src_port;
-		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+		if (udp_mask != NULL) {
+			/**
+			 * Only care about src & dst ports,
+			 * others should be masked.
+			 */
+			rule->b_mask = TRUE;
+			if (udp_mask->hdr.dgram_len ||
+			    udp_mask->hdr.dgram_cksum) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			rule->mask.src_port_mask = udp_mask->hdr.src_port;
+			rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
 
-		if (item->spec) {
-			rule->b_spec = TRUE;
-			udp_spec = item->spec;
-			rule->ixgbe_fdir.formatted.src_port =
-				udp_spec->hdr.src_port;
-			rule->ixgbe_fdir.formatted.dst_port =
-				udp_spec->hdr.dst_port;
+			if (item->spec) {
+				rule->b_spec = TRUE;
+				udp_spec = item->spec;
+				rule->ixgbe_fdir.formatted.src_port =
+					udp_spec->hdr.src_port;
+				rule->ixgbe_fdir.formatted.dst_port =
+					udp_spec->hdr.dst_port;
+			}
+		} else if (item->spec != NULL) {
+			/* No port mask means protocol-only match; spec is invalid. */
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by fdir filter");
+			return -rte_errno;
 		}
 
 		item = next_no_fuzzy_pattern(pattern, item);
-- 
2.47.3



* [PATCH v1 13/15] net/ixgbe: write drop queue at init
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (11 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 12/15] net/ixgbe: support protocol-only TCP and UDP rules Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 14/15] net/ixgbe: rely less on global flow state Anatoly Burakov
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

For FDIR, the drop queue index never changes, so write it once at init
instead of setting it during flow parsing.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 80d70fe083..62edada379 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1468,6 +1468,9 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
 	fdir_info->n_flows = 0;
 	fdir_info->mask_added = FALSE;
 
+	/* drop queue is always fixed */
+	IXGBE_DEV_FDIR_CONF(eth_dev)->drop_queue = IXGBE_FDIR_DROP_QUEUE;
+
 	return 0;
 }
 
-- 
2.47.3



* [PATCH v1 14/15] net/ixgbe: rely less on global flow state
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (12 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 13/15] net/ixgbe: write drop queue at init Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-04-30 11:14 ` [PATCH v1 15/15] net/ixgbe: refactor flow creation Anatoly Burakov
  2026-05-06  9:27 ` [PATCH v1 00/15] IXGBE fixes and cleanups Bruce Richardson
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

FDIR setup helpers were implicitly reading and updating global state.
This made behavior depend on cached values and tightly coupled call
ordering to hidden side effects.

Update `ixgbe_fdir_configure()`, `ixgbe_fdir_set_input_mask()`, and
`ixgbe_fdir_filter_program()` to take explicit inputs (`fdir_conf`,
`mask`, `mode`) and make the internal mask helpers use those arguments
instead of reading globals.

Also remove implicit cached-state behavior from
`ixgbe_fdir_set_flexbytes_offset()`: it no longer short-circuits on
cached offset and no longer updates cached offset internally.

Adjust all current callers to match the new semantics:
- pass local config/mask/mode values into setup APIs,
- write global mode/mask/flex cached state in caller code only after
  successful programming.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c |   4 +-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h |  13 +++-
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 104 +++++++++++--------------
 drivers/net/intel/ixgbe/ixgbe_flow.c   |  32 +++++---
 4 files changed, 79 insertions(+), 74 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 62edada379..d4122107ac 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -2724,7 +2724,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	ixgbe_configure_dcb(dev);
 
 	if (fdir_conf->mode != RTE_FDIR_MODE_NONE) {
-		err = ixgbe_fdir_configure(adapter);
+		struct ixgbe_hw_fdir_info *info =
+			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+		err = ixgbe_fdir_configure(adapter, fdir_conf, &info->mask);
 		if (err)
 			goto error;
 	}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 65e71e7689..165bd93379 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -705,13 +705,18 @@ void ixgbe_filterlist_flush(struct rte_eth_dev *dev);
 /*
  * Flow director function prototypes
  */
-int ixgbe_fdir_configure(struct ixgbe_adapter *adapter);
-int ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter);
+int ixgbe_fdir_configure(struct ixgbe_adapter *adapter,
+			 const struct rte_eth_fdir_conf *fdir_conf,
+			 const struct ixgbe_hw_fdir_mask *fdir_mask);
+int ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter,
+			      const struct ixgbe_hw_fdir_mask *mask,
+			      enum rte_fdir_mode mode);
 int ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
 				    uint16_t offset);
 int ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter,
-			      struct ixgbe_fdir_rule *rule,
-			      bool del, bool update);
+		struct rte_eth_fdir_conf *fdir_conf,
+		struct ixgbe_fdir_rule *rule,
+		bool del, bool update);
 void ixgbe_fdir_info_get(struct rte_eth_dev *dev,
 			 struct rte_eth_fdir_info *fdir_info);
 void ixgbe_fdir_stats_get(struct rte_eth_dev *dev,
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index e7a380da31..c2828df1f9 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -79,8 +79,12 @@
 #define IXGBE_FDIRIP6M_INNER_MAC_SHIFT 4
 
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
-static int fdir_set_input_mask_82599(struct ixgbe_adapter *adapter);
-static int fdir_set_input_mask_x550(struct ixgbe_adapter *adapter);
+static int fdir_set_input_mask_82599(struct ixgbe_adapter *adapter,
+		const struct ixgbe_hw_fdir_mask *mask,
+		enum rte_fdir_mode mode);
+static int fdir_set_input_mask_x550(struct ixgbe_adapter *adapter,
+		const struct ixgbe_hw_fdir_mask *mask,
+		enum rte_fdir_mode mode);
 static int ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -248,13 +252,11 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct ixgbe_adapter *adapter)
+fdir_set_input_mask_82599(struct ixgbe_adapter *adapter,
+		const struct ixgbe_hw_fdir_mask *mask,
+		enum rte_fdir_mode mode)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
-	struct rte_eth_fdir_conf *fdir_conf =
-		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
-	struct ixgbe_hw_fdir_info *info =
-			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	/*
 	 * mask VM pool and DIPv6 since there are currently not supported
 	 * mask FLEX byte, it will be set in flex_conf
@@ -266,34 +268,34 @@ fdir_set_input_mask_82599(struct ixgbe_adapter *adapter)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (!info->mask.l4_proto_match)
+	if (!mask->l4_proto_match)
 		/* set L4P to ignore L4 protocol for IP traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (info->mask.vlan_tci_mask == 0)
+	else if (mask->vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
 
 	/* flex byte mask */
-	if (info->mask.flex_bytes_mask == 0)
+	if (mask->flex_bytes_mask == 0)
 		fdirm |= IXGBE_FDIRM_FLEX;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(info->mask.dst_port_mask),
-			rte_be_to_cpu_16(info->mask.src_port_mask));
+			rte_be_to_cpu_16(mask->dst_port_mask),
+			rte_be_to_cpu_16(mask->src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -311,16 +313,16 @@ fdir_set_input_mask_82599(struct ixgbe_adapter *adapter)
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(info->mask.src_ipv4_mask);
+	*reg = ~(mask->src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(info->mask.dst_ipv4_mask);
+	*reg = ~(mask->dst_ipv4_mask);
 
-	if (fdir_conf->mode == RTE_FDIR_MODE_SIGNATURE) {
+	if (mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
-			    info->mask.src_ipv6_mask;
+		fdiripv6m = (mask->dst_ipv6_mask << 16) |
+				mask->src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
 	}
@@ -333,20 +335,17 @@ fdir_set_input_mask_82599(struct ixgbe_adapter *adapter)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
+fdir_set_input_mask_x550(struct ixgbe_adapter *adapter,
+		const struct ixgbe_hw_fdir_mask *mask,
+		enum rte_fdir_mode mode)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
-	struct rte_eth_fdir_conf *fdir_conf =
-		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
-	struct ixgbe_hw_fdir_info *info =
-			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	/* mask VM pool and DIPv6 since there are currently not supported
 	 * mask FLEX byte, it will be set in flex_conf
 	 */
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
 			 IXGBE_FDIRM_FLEX;
 	uint32_t fdiripv6m;
-	enum rte_fdir_mode mode = fdir_conf->mode;
 	uint16_t mac_mask;
 
 	PMD_INIT_FUNC_TRACE();
@@ -358,16 +357,16 @@ fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (info->mask.vlan_tci_mask == 0)
+	else if (mask->vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
@@ -382,13 +381,13 @@ fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
 		fdiripv6m |= IXGBE_FDIRIP6M_INNER_MAC;
-		mac_mask = info->mask.mac_addr_byte_mask &
+		mac_mask = mask->mac_addr_byte_mask &
 			(IXGBE_FDIRIP6M_INNER_MAC >>
 			IXGBE_FDIRIP6M_INNER_MAC_SHIFT);
 		fdiripv6m &= ~((mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT) &
 				IXGBE_FDIRIP6M_INNER_MAC);
 
-		switch (info->mask.tunnel_type_mask) {
+		switch (mask->tunnel_type_mask) {
 		case 0:
 			/* Mask tunnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -400,7 +399,7 @@ fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
 			return -EINVAL;
 		}
 
-		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(mask->tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -427,36 +426,28 @@ fdir_set_input_mask_x550(struct ixgbe_adapter *adapter)
 }
 
 int
-ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter)
+ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter,
+		const struct ixgbe_hw_fdir_mask *mask,
+		enum rte_fdir_mode mode)
 {
-	struct rte_eth_fdir_conf *fdir_conf =
-		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
-	enum rte_fdir_mode mode = fdir_conf->mode;
-
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(adapter);
+		return fdir_set_input_mask_82599(adapter, mask, mode);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(adapter);
+		return fdir_set_input_mask_x550(adapter, mask, mode);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
 int
-ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
-				uint16_t offset)
+ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter, uint16_t offset)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
-	struct ixgbe_hw_fdir_info *fdir_info =
-		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
 	uint32_t fdirctrl;
 	int i;
 
-	if (fdir_info->flex_bytes_offset == offset)
-		return 0;
-
 	/**
 	 * 82599 adapters flow director init flow cannot be restarted,
 	 * Workaround 82599 silicon errata by performing the following steps
@@ -493,8 +484,6 @@ ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
 		return -ETIMEDOUT;
 	}
 
-	fdir_info->flex_bytes_offset = offset;
-
 	return 0;
 }
 
@@ -566,11 +555,11 @@ ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter,
 }
 
 int
-ixgbe_fdir_configure(struct ixgbe_adapter *adapter)
+ixgbe_fdir_configure(struct ixgbe_adapter *adapter,
+		const struct rte_eth_fdir_conf *fdir_conf,
+		const struct ixgbe_hw_fdir_mask *fdir_mask)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
-	struct rte_eth_fdir_conf *fdir_conf =
-		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	int err;
 	uint32_t fdirctrl, pbsize;
 	int i;
@@ -617,7 +606,7 @@ ixgbe_fdir_configure(struct ixgbe_adapter *adapter)
 	for (i = 1; i < 8; i++)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
 
-	err = ixgbe_fdir_set_input_mask(adapter);
+	err = ixgbe_fdir_set_input_mask(adapter, fdir_mask, mode);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD mask");
 		return err;
@@ -1046,13 +1035,12 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 
 int
 ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter,
-			  struct ixgbe_fdir_rule *rule,
-			  bool del,
-			  bool update)
+		struct rte_eth_fdir_conf *fdir_conf,
+		struct ixgbe_fdir_rule *rule,
+		bool del,
+		bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
-	struct rte_eth_fdir_conf *fdir_conf =
-		IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
 	uint8_t queue;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index eae81462f6..cda2e95d22 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -2838,7 +2838,6 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
-	fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE;
 
 	if (hw->mac.type != ixgbe_mac_82599EB &&
 		hw->mac.type != ixgbe_mac_X540 &&
@@ -2863,12 +2862,15 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 step_next:
 
 	if (fdir_conf->mode == RTE_FDIR_MODE_NONE) {
-		fdir_conf->mode = rule->mode;
-		ret = ixgbe_fdir_configure(adapter);
+		struct rte_eth_fdir_conf fc = *fdir_conf;
+
+		fc.mode = rule->mode;
+		ret = ixgbe_fdir_configure(adapter, &fc, &rule->mask);
 		if (ret) {
 			fdir_conf->mode = RTE_FDIR_MODE_NONE;
 			return ret;
 		}
+		fdir_conf->mode = rule->mode;
 	} else if (fdir_conf->mode != rule->mode) {
 		return -ENOTSUP;
 	}
@@ -3167,19 +3169,25 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		/* A mask cannot be deleted. */
 		if (fdir_rule.b_mask) {
 			if (!fdir_info->mask_added) {
-				/* It's the first time the mask is set. */
-				*&fdir_info->mask = *&fdir_rule.mask;
+				bool flex_byte_offset_changed =
+					fdir_info->flex_bytes_offset !=
+					fdir_rule.flex_bytes_offset;
 
-				if (fdir_rule.mask.flex_bytes_mask) {
+				if (fdir_rule.mask.flex_bytes_mask &&
+				    flex_byte_offset_changed) {
 					ret = ixgbe_fdir_set_flexbytes_offset(adapter,
 						fdir_rule.flex_bytes_offset);
 					if (ret)
 						goto out;
 				}
-				ret = ixgbe_fdir_set_input_mask(adapter);
+				ret = ixgbe_fdir_set_input_mask(adapter,
+						&fdir_rule.mask, fdir_rule.mode);
 				if (ret)
 					goto out;
 
+				fdir_info->mask = fdir_rule.mask;
+				fdir_info->flex_bytes_offset =
+					fdir_rule.flex_bytes_offset;
 				fdir_info->mask_added = TRUE;
 				first_mask = TRUE;
 			} else {
@@ -3201,8 +3209,10 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 		}
 
 		if (fdir_rule.b_spec) {
-			ret = ixgbe_fdir_filter_program(adapter, &fdir_rule,
-					FALSE, FALSE);
+			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+
+			ret = ixgbe_fdir_filter_program(adapter, fdir_conf,
+					&fdir_rule, FALSE, FALSE);
 			if (!ret) {
 				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
 					sizeof(struct ixgbe_fdir_rule_ele), 0);
@@ -3374,6 +3384,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_filter_ele_base *flow_mem_base;
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 
 	/* Validate ownership before touching HW/SW state. */
@@ -3433,9 +3444,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 		rte_memcpy(&fdir_rule,
 			&fdir_rule_ptr->filter_info,
 			sizeof(struct ixgbe_fdir_rule));
-		ret = ixgbe_fdir_filter_program(adapter, &fdir_rule, TRUE, FALSE);
+		ret = ixgbe_fdir_filter_program(adapter, fdir_conf, &fdir_rule, TRUE, FALSE);
 		if (!ret) {
-			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 			rte_free(fdir_rule_ptr);
 			if (fdir_info->n_flows > 0 && --(fdir_info->n_flows) == 0) {
 				fdir_info->mask_added = false;
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v1 15/15] net/ixgbe: refactor flow creation
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (13 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 14/15] net/ixgbe: rely less on global flow state Anatoly Burakov
@ 2026-04-30 11:14 ` Anatoly Burakov
  2026-05-06  9:27 ` [PATCH v1 00/15] IXGBE fixes and cleanups Bruce Richardson
  15 siblings, 0 replies; 20+ messages in thread
From: Anatoly Burakov @ 2026-04-30 11:14 UTC (permalink / raw)
  To: dev, Vladimir Medvedkin

For FDIR, the flow creation process is complex and tangled up in global
state updates. To simplify the code, refactor the global state handling
and move it out of the main flow create function.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_flow.c | 243 ++++++++++++++++-----------
 1 file changed, 144 insertions(+), 99 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index cda2e95d22..7c0cdf8b74 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -2828,16 +2828,14 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
 
 static int
 ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
-			const struct rte_flow_attr *attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct ixgbe_fdir_rule *rule,
-			struct rte_flow_error *error)
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule,
+		struct rte_flow_error *error)
 {
 	int ret;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 
 	if (hw->mac.type != ixgbe_mac_82599EB &&
 		hw->mac.type != ixgbe_mac_X540 &&
@@ -2849,36 +2847,131 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
 
 	ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern,
 					actions, rule, error);
-
 	if (!ret)
-		goto step_next;
+		return 0;
 
-	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
+	return ixgbe_parse_fdir_filter_tunnel(attr, pattern,
 					actions, rule, error);
+}
 
+static int
+ixgbe_fdir_process_rule(struct ixgbe_adapter *adapter,
+		struct ixgbe_hw_fdir_info *fdir_info,
+		struct ixgbe_fdir_rule *fdir_rule,
+		bool *first_mask,
+		struct rte_flow_error *error)
+{
+	bool flex_byte_offset_changed;
+	int ret;
+
+	/* rule must have a spec to be valid */
+	if (!fdir_rule->b_spec)
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			NULL, "No filter spec");
+
+	/* if rule doesn't have a mask, nothing to be done */
+	if (!fdir_rule->b_mask)
+		return 0;
+
+	/* if we already have a mask, check if it's compatible */
+	if (fdir_info->mask_added) {
+		ret = memcmp(&fdir_info->mask, &fdir_rule->mask,
+				sizeof(struct ixgbe_hw_fdir_mask));
+		if (ret)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Mask mismatch");
+
+		if (fdir_rule->mask.flex_bytes_mask &&
+		    fdir_info->flex_bytes_offset !=
+		    fdir_rule->flex_bytes_offset)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Flex bytes offset mismatch");
+
+		/* success */
+		return 0;
+	}
+
+	/* we don't have a mask yet, so set it up based on this rule */
+	flex_byte_offset_changed =
+			fdir_info->flex_bytes_offset !=
+			fdir_rule->flex_bytes_offset;
+
+	if (fdir_rule->mask.flex_bytes_mask != 0 && flex_byte_offset_changed) {
+		ret = ixgbe_fdir_set_flexbytes_offset(adapter,
+				fdir_rule->flex_bytes_offset);
+		if (ret)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"Failed to set flex bytes offset");
+	}
+
+	ret = ixgbe_fdir_set_input_mask(adapter, &fdir_rule->mask,
+			fdir_rule->mode);
 	if (ret)
-		return ret;
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			NULL, "Failed to set fdir mask");
 
-step_next:
+	/* let the caller know that we've installed a mask */
+	*first_mask = true;
 
+	return 0;
+}
+
+static int
+ixgbe_fdir_flow_program(struct rte_eth_dev *dev,
+		struct ixgbe_adapter *adapter,
+		struct ixgbe_fdir_rule *fdir_rule,
+		bool *first_mask,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	struct rte_eth_fdir_conf local_fdir_conf = *fdir_conf;
+	struct ixgbe_hw_fdir_info *fdir_info =
+			IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+	int ret;
+
+	if (fdir_rule->queue >= dev->data->nb_rx_queues) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			NULL, "queue id > max number of queues");
+	}
+
+	local_fdir_conf.mode = fdir_rule->mode;
+
+	/* Configure FDIR mode if this is the first filter */
 	if (fdir_conf->mode == RTE_FDIR_MODE_NONE) {
-		struct rte_eth_fdir_conf fc = *fdir_conf;
-
-		fc.mode = rule->mode;
-		ret = ixgbe_fdir_configure(adapter, &fc, &rule->mask);
+		ret = ixgbe_fdir_configure(adapter, &local_fdir_conf, &fdir_rule->mask);
 		if (ret) {
-			fdir_conf->mode = RTE_FDIR_MODE_NONE;
-			return ret;
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Failed to configure fdir mode");
 		}
-		fdir_conf->mode = rule->mode;
-	} else if (fdir_conf->mode != rule->mode) {
-		return -ENOTSUP;
+	} else if (fdir_conf->mode != fdir_rule->mode) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			NULL, "Conflict with existing fdir mode");
 	}
 
-	if (rule->queue >= dev->data->nb_rx_queues)
-		return -ENOTSUP;
+	/* Process and validate rule spec and mask */
+	ret = ixgbe_fdir_process_rule(adapter, fdir_info, fdir_rule,
+		first_mask, error);
+	if (ret)
+		return ret;
 
-	return ret;
+	/* Program the filter */
+	ret = ixgbe_fdir_filter_program(adapter, &local_fdir_conf,
+			fdir_rule, FALSE, FALSE);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			NULL, "Failed to add fdir filter");
+
+	return 0;
 }
 
 static int
@@ -3066,7 +3159,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
-	uint8_t first_mask = FALSE;
 
 	flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
 	if (!flow) {
@@ -3166,82 +3258,35 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
 				actions, &fdir_rule, error);
 	if (!ret) {
-		/* A mask cannot be deleted. */
-		if (fdir_rule.b_mask) {
-			if (!fdir_info->mask_added) {
-				bool flex_byte_offset_changed =
-					fdir_info->flex_bytes_offset !=
-					fdir_rule.flex_bytes_offset;
-
-				if (fdir_rule.mask.flex_bytes_mask &&
-				    flex_byte_offset_changed) {
-					ret = ixgbe_fdir_set_flexbytes_offset(adapter,
-						fdir_rule.flex_bytes_offset);
-					if (ret)
-						goto out;
-				}
-				ret = ixgbe_fdir_set_input_mask(adapter,
-						&fdir_rule.mask, fdir_rule.mode);
-				if (ret)
-					goto out;
-
-				fdir_info->mask = fdir_rule.mask;
-				fdir_info->flex_bytes_offset =
-					fdir_rule.flex_bytes_offset;
-				fdir_info->mask_added = TRUE;
-				first_mask = TRUE;
-			} else {
-				/**
-				 * Only support one global mask,
-				 * all the masks should be the same.
-				 */
-				ret = memcmp(&fdir_info->mask,
-					&fdir_rule.mask,
-					sizeof(struct ixgbe_hw_fdir_mask));
-				if (ret)
-					goto out;
-
-				if (fdir_rule.mask.flex_bytes_mask &&
-				    fdir_info->flex_bytes_offset !=
-				    fdir_rule.flex_bytes_offset)
-					goto out;
-			}
+		struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+		bool first_mask = false;
+
+		ret = ixgbe_fdir_flow_program(dev, adapter, &fdir_rule,
+			&first_mask, error);
+		if (ret)
+			goto out;
+
+		fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
+				sizeof(struct ixgbe_fdir_rule_ele), 0);
+		if (!fdir_rule_ptr) {
+			PMD_DRV_LOG(ERR, "failed to allocate memory");
+			goto out;
 		}
-
-		if (fdir_rule.b_spec) {
-			struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
-
-			ret = ixgbe_fdir_filter_program(adapter, fdir_conf,
-					&fdir_rule, FALSE, FALSE);
-			if (!ret) {
-				fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
-					sizeof(struct ixgbe_fdir_rule_ele), 0);
-				if (!fdir_rule_ptr) {
-					PMD_DRV_LOG(ERR, "failed to allocate memory");
-					goto out;
-				}
-				rte_memcpy(&fdir_rule_ptr->filter_info,
-					&fdir_rule,
-					sizeof(struct ixgbe_fdir_rule));
-				flow->rule = fdir_rule_ptr;
-				flow->filter_type = RTE_ETH_FILTER_FDIR;
-				fdir_info->n_flows++;
-
-				return flow;
-			}
-
-			if (ret) {
-				/**
-				 * clean the mask_added flag if fail to
-				 * program
-				 **/
-				if (first_mask)
-					fdir_info->mask_added = FALSE;
-				goto out;
-			}
+		/* update global state */
+		if (first_mask) {
+			fdir_info->mask_added = TRUE;
+			fdir_info->mask = fdir_rule.mask;
+			fdir_info->flex_bytes_offset = fdir_rule.flex_bytes_offset;
 		}
-
-		goto out;
+		fdir_info->n_flows++;
+		fdir_conf->mode = fdir_rule.mode;
+
+		rte_memcpy(&fdir_rule_ptr->filter_info,
+			&fdir_rule,
+			sizeof(struct ixgbe_fdir_rule));
+		flow->rule = fdir_rule_ptr;
+		flow->filter_type = RTE_ETH_FILTER_FDIR;
+		return flow;
 	}
 
 	memset(&l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
-- 
2.47.3



* Re: [PATCH v1 08/15] net/ixgbe: do not use flow list to count flows
  2026-04-30 11:14 ` [PATCH v1 08/15] net/ixgbe: do not use flow list to count flows Anatoly Burakov
@ 2026-05-06  9:24   ` Bruce Richardson
  0 siblings, 0 replies; 20+ messages in thread
From: Bruce Richardson @ 2026-05-06  9:24 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev, Vladimir Medvedkin

On Thu, Apr 30, 2026 at 12:14:37PM +0100, Anatoly Burakov wrote:
> Currently, FDIR code will use emptiness of its flow list as an indicator
> that there are no flows (and that we can install a mask). That usage is the
> only thing preventing us from getting rid of the FDIR flow list
> altogether, so introduce a new mechanism for flow count tracking.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>  drivers/net/intel/ixgbe/ixgbe_ethdev.c | 1 +
>  drivers/net/intel/ixgbe/ixgbe_ethdev.h | 1 +
>  drivers/net/intel/ixgbe/ixgbe_fdir.c   | 8 +++++---
>  drivers/net/intel/ixgbe/ixgbe_flow.c   | 3 ++-
>  4 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
> index 1c4a2e1177..ee1b499b49 100644
> --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
> @@ -1465,6 +1465,7 @@ static int ixgbe_fdir_filter_init(struct rte_eth_dev *eth_dev)
>  		rte_hash_free(fdir_info->hash_handle);
>  		return -ENOMEM;
>  	}
> +	fdir_info->n_flows = 0;
>  	fdir_info->mask_added = FALSE;
>  
>  	return 0;
> diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> index 2fb6d55387..6147cd6bdf 100644
> --- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
> @@ -199,6 +199,7 @@ struct ixgbe_hw_fdir_info {
>  	struct ixgbe_fdir_filter **hash_map;
>  	struct rte_hash *hash_handle; /* cuckoo hash handler */
>  	bool mask_added; /* If already got mask from consistent filter */
> +	uint32_t    n_flows;
>  };
>  
Minor nit, for future reference: best to put this before the mask_added
field, so that we have the fields sorted by size and avoid padding gaps.
However, this struct already has gaps, and is not perf sensitive, so not a
big deal here. If you don't mind me doing so, I can reorder the new field
on apply.

/Bruce


* Re: [PATCH v1 00/15] IXGBE fixes and cleanups
  2026-04-30 11:14 [PATCH v1 00/15] IXGBE fixes and cleanups Anatoly Burakov
                   ` (14 preceding siblings ...)
  2026-04-30 11:14 ` [PATCH v1 15/15] net/ixgbe: refactor flow creation Anatoly Burakov
@ 2026-05-06  9:27 ` Bruce Richardson
  2026-05-06 10:27   ` Bruce Richardson
  15 siblings, 1 reply; 20+ messages in thread
From: Bruce Richardson @ 2026-05-06  9:27 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev

On Thu, Apr 30, 2026 at 12:14:29PM +0100, Anatoly Burakov wrote:
> This patchset fixes a bunch of very old issues in flow management
> in IXGBE driver, such as:
> 
> - storing process-local pointers in shared memory
> - incorrect L4 protocol matching for FDIR
> - wrong handling of SCTP flow items
> - reading stale FDIR state after flow destroy/flush
> - storing all flows in global lists
> 
> In addition, some cleanup is also performed - refactors, moving
> things around to avoid accessing process-local state, and
> writing read-only values at init instead of deep in FDIR.
> 
> Finally, FDIR was also rejecting protocol-only matches for TCP
> and UDP, these are now supported.
> 
> Depends on flow dump patchset: https://patches.dpdk.org/project/dpdk/list/?series=38016
> 
> Anatoly Burakov (15):
>   net/ixgbe: fix flows not being scoped to port
>   net/ixgbe: fix shared PF pointer in representor
>   net/ixgbe: fix non-shared data in IPsec session
>   net/ixgbe: fix SCTP protocol-only flow parsing
>   net/ixgbe: fix L4 protocol mask handling
>   net/ixgbe: reset flow state on clear paths
>   net/ixgbe: store max VFs in adapter
>   net/ixgbe: do not use flow list to count flows
>   net/ixgbe: remove redundant flow tracking lists
>   net/ixgbe: reduce FDIR conf macro usage
>   net/ixgbe: use adapter in flow-related calls
>   net/ixgbe: support protocol-only TCP and UDP rules
>   net/ixgbe: write drop queue at init
>   net/ixgbe: rely less on global flow state
>   net/ixgbe: refactor flow creation
> 
Some good cleanups, thanks.

Series-Acked-by: Bruce Richardson <bruce.richardson@intel.com>


* Re: [PATCH v1 00/15] IXGBE fixes and cleanups
  2026-05-06  9:27 ` [PATCH v1 00/15] IXGBE fixes and cleanups Bruce Richardson
@ 2026-05-06 10:27   ` Bruce Richardson
  0 siblings, 0 replies; 20+ messages in thread
From: Bruce Richardson @ 2026-05-06 10:27 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev

On Wed, May 06, 2026 at 10:27:32AM +0100, Bruce Richardson wrote:
> On Thu, Apr 30, 2026 at 12:14:29PM +0100, Anatoly Burakov wrote:
> > This patchset fixes a bunch of very old issues in flow management
> > in IXGBE driver, such as:
> > 
> > - storing process-local pointers in shared memory
> > - incorrect L4 protocol matching for FDIR
> > - wrong handling of SCTP flow items
> > - reading stale FDIR state after flow destroy/flush
> > - storing all flows in global lists
> > 
> > In addition, some cleanup is also performed - refactors, moving
> > things around to avoid accessing process-local state, and
> > writing read-only values at init instead of deep in FDIR.
> > 
> > Finally, FDIR was also rejecting protocol-only matches for TCP
> > and UDP, these are now supported.
> > 
> > Depends on flow dump patchset: https://patches.dpdk.org/project/dpdk/list/?series=38016
> > 
> > Anatoly Burakov (15):
> >   net/ixgbe: fix flows not being scoped to port
> >   net/ixgbe: fix shared PF pointer in representor
> >   net/ixgbe: fix non-shared data in IPsec session
> >   net/ixgbe: fix SCTP protocol-only flow parsing
> >   net/ixgbe: fix L4 protocol mask handling
> >   net/ixgbe: reset flow state on clear paths
> >   net/ixgbe: store max VFs in adapter
> >   net/ixgbe: do not use flow list to count flows
> >   net/ixgbe: remove redundant flow tracking lists
> >   net/ixgbe: reduce FDIR conf macro usage
> >   net/ixgbe: use adapter in flow-related calls
> >   net/ixgbe: support protocol-only TCP and UDP rules
> >   net/ixgbe: write drop queue at init
> >   net/ixgbe: rely less on global flow state
> >   net/ixgbe: refactor flow creation
> > 
> Some good cleanups, thanks.
> 
> Series-Acked-by: Bruce Richardson <bruce.richardson@intel.com>

Series applied to dpdk-next-net-intel.

Thanks,
/Bruce


* Re: [PATCH v1 03/15] net/ixgbe: fix non-shared data in IPsec session
  2026-04-30 11:14 ` [PATCH v1 03/15] net/ixgbe: fix non-shared data in IPsec session Anatoly Burakov
@ 2026-05-07 10:50   ` Radu Nicolau
  0 siblings, 0 replies; 20+ messages in thread
From: Radu Nicolau @ 2026-05-07 10:50 UTC (permalink / raw)
  To: Anatoly Burakov, dev, Vladimir Medvedkin, Declan Doherty


On 30-Apr-26 12:14 PM, Anatoly Burakov wrote:
> Currently, ixgbe IPsec session private data stores an ethdev pointer.
> That pointer is process local, but the session private data is shared,
> so a secondary process can read an invalid pointer value.
>
> Fix this by storing ethdev data pointer in session private data instead,
> and using it for session/device binding checks and dev_private lookups
> when adding SAs.
>
> Fixes: 9a0752f498d2 ("net/ixgbe: enable inline IPsec")
> Cc: radu.nicolau@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Acked-by: Radu Nicolau <radu.nicolau@intel.com>


