* [RFC PATCH v1 00/21] Building a better rte_flow parser
@ 2026-03-16 17:27 Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 01/21] ethdev: add flow graph API Anatoly Burakov
` (21 more replies)
0 siblings, 22 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev
Most rte_flow parsers in DPDK suffer from huge implementation complexity because,
even though 99% of what people use rte_flow parsers for is parsing protocol
graphs, no parser is written explicitly as a graph. This patchset suggests a
viable model for building rte_flow parsers as graphs, by offering a lightweight
header-only library to build rte_flow parsing graphs without too much
boilerplate and complexity.
Most of the patchset is about Intel drivers, but they are meant as
reimplementations as well as examples for the rest of the community to assess
how to build parsers using this new infrastructure. I expect the first two
patches will be of most interest to non-Intel reviewers, as they deal with
building two reusable parser architecture pieces.
The first piece is a new flow graph helper in ethdev. Its purpose is
deliberately narrow: it targets the protocol-graph part of rte_flow pattern
parsing, where drivers walk packet headers and validate legal item sequences and
parameters. That does not cover all possible rte_flow features, especially more
exotic flow items, but it does cover a large and widely shared part of what
existing drivers need to do. In other words, the only flow items this
infrastructure *doesn't* cover are those that do not lend themselves well to
being parsed as a graph of protocol headers (e.g. conntrack items). Everything
else should be covered or cover-able.
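The core idea, validating an item sequence by walking edges of a protocol graph, can be sketched in a small self-contained model. This is plain C, independent of the actual API added later in this series; all names below (ITEM_*, EDGE_END, pattern_valid) are illustrative, not part of the proposed API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* illustrative item types, mirroring a tiny subset of rte_flow_item_type */
enum item { ITEM_END, ITEM_ETH, ITEM_IPV4, ITEM_UDP };

#define EDGE_END ((size_t)-1)

/* a node pairs an item type with the indices of its legal successors */
struct node { enum item type; const size_t *next; };

/* graph: start(0) -> ETH(1) -> IPV4(2) -> UDP(3) -> end(4) */
static const size_t next_start[] = { 1, EDGE_END };    /* start: ETH only */
static const size_t next_eth[]   = { 2, 4, EDGE_END }; /* ETH: IPV4 or END */
static const size_t next_ipv4[]  = { 3, 4, EDGE_END }; /* IPV4: UDP or END */
static const size_t next_udp[]   = { 4, EDGE_END };    /* UDP: END */

static const struct node graph[] = {
	{ ITEM_END,  next_start }, /* start node; its type is unused */
	{ ITEM_ETH,  next_eth },
	{ ITEM_IPV4, next_ipv4 },
	{ ITEM_UDP,  next_udp },
	{ ITEM_END,  NULL },       /* terminal end node */
};

/* walk the pattern; each item must be reachable via an edge from here */
static bool
pattern_valid(const enum item *pattern)
{
	size_t cur = 0; /* start node */
	const enum item *it = pattern;
	bool done = false;

	while (!done) {
		const size_t *next = graph[cur].next;
		size_t i, found = EDGE_END;

		if (next == NULL)
			return false; /* no outgoing edges, but items remain */
		for (i = 0; next[i] != EDGE_END; i++)
			if (graph[next[i]].type == *it)
				found = next[i];
		if (found == EDGE_END)
			return false; /* transition not in the graph */
		cur = found;
		done = (*it == ITEM_END);
		it++;
	}
	return true;
}
```

Supporting a new protocol sequence then becomes a matter of adding a node and wiring up edges, rather than extending hand-rolled sequential checks, which is the complexity reduction this cover letter argues for.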
The second piece is a reusable flow engine framework for Intel Ethernet drivers.
This is kept Intel-local for now because I do not feel it is generic enough to
be presented as an ethdev-wide engine model. Even so, the intent is to establish
a cleaner parser architecture with a defined interaction model, explicit memory
ownership rules, and engine definitions that do not block secondary-process-safe
usage. It is my hope that we could also promote something like this into ethdev
proper and remove the necessity for drivers to build so much boilerplate around
rte_flow parsing (and more often than not doing it in a way that is more complex
than it needs to be).
Most of the rest of the series is parser reimplementation, but that is mainly
the vehicle for demonstrating and validating those two pieces. ixgbe and i40e
are wired into the new common parsing path, and their existing parsers are
migrated incrementally to the graph-based model. Besides reducing ad hoc parser
code, this also makes validation more explicit and more consistent with the
actual install path. In a few places that means invalid inputs that were
previously ignored, deferred, or interpreted loosely are now rejected earlier
and more strictly, without any increase in code complexity (in fact, with a
marked *decrease* of it!).
Series depends on previously submitted patchsets:
- IAVF global buffer fix [1]
- Common attr parsing stuff [2]
[1] https://patches.dpdk.org/project/dpdk/list/?series=37585&state=*
[2] https://patches.dpdk.org/project/dpdk/list/?series=37663&state=*
Anatoly Burakov (21):
ethdev: add flow graph API
net/intel/common: add flow engines infrastructure
net/intel/common: add utility functions
net/ixgbe: add support for common flow parsing
net/ixgbe: reimplement ethertype parser
net/ixgbe: reimplement syn parser
net/ixgbe: reimplement L2 tunnel parser
net/ixgbe: reimplement ntuple parser
net/ixgbe: reimplement security parser
net/ixgbe: reimplement FDIR parser
net/ixgbe: reimplement hash parser
net/i40e: add support for common flow parsing
net/i40e: reimplement ethertype parser
net/i40e: reimplement FDIR parser
net/i40e: reimplement tunnel QinQ parser
net/i40e: reimplement VXLAN parser
net/i40e: reimplement NVGRE parser
net/i40e: reimplement MPLS parser
net/i40e: reimplement gtp parser
net/i40e: reimplement L4 cloud parser
net/i40e: reimplement hash parser
drivers/net/intel/common/flow_engine.h | 1003 ++++
drivers/net/intel/common/flow_util.h | 165 +
drivers/net/intel/i40e/i40e_ethdev.c | 56 +-
drivers/net/intel/i40e/i40e_ethdev.h | 49 +-
drivers/net/intel/i40e/i40e_fdir.c | 47 -
drivers/net/intel/i40e/i40e_flow.c | 4092 +----------------
drivers/net/intel/i40e/i40e_flow.h | 44 +
drivers/net/intel/i40e/i40e_flow_ethertype.c | 258 ++
drivers/net/intel/i40e/i40e_flow_fdir.c | 1806 ++++++++
drivers/net/intel/i40e/i40e_flow_hash.c | 1289 ++++++
drivers/net/intel/i40e/i40e_flow_tunnel.c | 1510 ++++++
drivers/net/intel/i40e/i40e_hash.c | 980 +---
drivers/net/intel/i40e/i40e_hash.h | 8 +-
drivers/net/intel/i40e/meson.build | 4 +
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 40 +-
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 13 +-
drivers/net/intel/ixgbe/ixgbe_fdir.c | 13 +-
drivers/net/intel/ixgbe/ixgbe_flow.c | 3130 +------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 38 +
.../net/intel/ixgbe/ixgbe_flow_ethertype.c | 240 +
drivers/net/intel/ixgbe/ixgbe_flow_fdir.c | 1510 ++++++
drivers/net/intel/ixgbe/ixgbe_flow_hash.c | 182 +
drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c | 228 +
drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c | 483 ++
drivers/net/intel/ixgbe/ixgbe_flow_security.c | 297 ++
drivers/net/intel/ixgbe/ixgbe_flow_syn.c | 280 ++
drivers/net/intel/ixgbe/meson.build | 7 +
lib/ethdev/meson.build | 1 +
lib/ethdev/rte_flow_graph.h | 414 ++
29 files changed, 9867 insertions(+), 8320 deletions(-)
create mode 100644 drivers/net/intel/common/flow_engine.h
create mode 100644 drivers/net/intel/common/flow_util.h
create mode 100644 drivers/net/intel/i40e/i40e_flow.h
create mode 100644 drivers/net/intel/i40e/i40e_flow_ethertype.c
create mode 100644 drivers/net/intel/i40e/i40e_flow_fdir.c
create mode 100644 drivers/net/intel/i40e/i40e_flow_hash.c
create mode 100644 drivers/net/intel/i40e/i40e_flow_tunnel.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow.h
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_fdir.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_hash.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_security.c
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_syn.c
create mode 100644 lib/ethdev/rte_flow_graph.h
--
2.47.3
* [RFC PATCH v1 01/21] ethdev: add flow graph API
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 02/21] net/intel/common: add flow engines infrastructure Anatoly Burakov
` (20 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Thomas Monjalon, Andrew Rybchenko, Ori Kam
This commit adds a flow graph parsing API. This is a helper API intended to
help ethdev drivers implement rte_flow parsers, as common usage maps very
well onto a graph traversal problem.
Features provided by the API:
- Flow graph, edge, and node definitions
- Graph traversal logic
- Declarative validation against common flow item types
- Per-node validation and state processing callbacks
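The declarative validation mentioned above works through per-node constraint flags. As an illustration, here is a self-contained copy of the constraint predicate from this patch (the enum and helper mirror rte_flow_graph_node_expect and the internal check, renamed only so the snippet stands alone):

```c
#include <assert.h>
#include <stdbool.h>

/* same flag semantics as rte_flow_graph_node_expect in this patch */
enum node_expect {
	EXPECT_NONE      = 0,
	EXPECT_EMPTY     = 1 << 0, /* spec, mask, last must be NULL */
	EXPECT_SPEC      = 1 << 1, /* spec required, mask/last NULL */
	EXPECT_MASK      = 1 << 2, /* mask required, spec/last NULL */
	EXPECT_SPEC_MASK = 1 << 3, /* spec and mask required, last NULL */
	EXPECT_RANGE     = 1 << 4, /* spec, mask and last all required */
	EXPECT_NOT_RANGE = 1 << 5, /* last must be NULL */
};

/* ORed flags are alternatives: satisfying any one constraint is enough */
static bool
check_constraint(enum node_expect c, bool has_spec, bool has_mask, bool has_last)
{
	bool empty = !has_spec && !has_mask && !has_last;

	if ((c & EXPECT_EMPTY) && empty)
		return true;
	if ((c & EXPECT_NOT_RANGE) && !has_last)
		return true;
	if ((c & EXPECT_SPEC) && has_spec && !has_mask && !has_last)
		return true;
	if ((c & EXPECT_MASK) && has_mask && !has_spec && !has_last)
		return true;
	if ((c & EXPECT_SPEC_MASK) && has_spec && has_mask && !has_last)
		return true;
	if ((c & EXPECT_RANGE) && has_spec && has_mask && has_last)
		return true;
	return false;
}
```

For example, a node marked `EXPECT_EMPTY | EXPECT_SPEC_MASK` accepts either a fully empty item or one carrying both spec and mask, but rejects an item with only a spec, with no validation callback code needed.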
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Depends-on: series-37663 ("Add common flow attr/action parsing infrastructure to Intel PMD's")
Depends-on: series-37585 ("Reduce reliance on global response buffer in IAVF")
lib/ethdev/meson.build | 1 +
lib/ethdev/rte_flow_graph.h | 414 ++++++++++++++++++++++++++++++++++++
2 files changed, 415 insertions(+)
create mode 100644 lib/ethdev/rte_flow_graph.h
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 8ba6c708a2..686b64e3c2 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -40,6 +40,7 @@ driver_sdk_headers += files(
'ethdev_pci.h',
'ethdev_vdev.h',
'rte_flow_driver.h',
+ 'rte_flow_graph.h',
'rte_mtr_driver.h',
'rte_tm_driver.h',
)
diff --git a/lib/ethdev/rte_flow_graph.h b/lib/ethdev/rte_flow_graph.h
new file mode 100644
index 0000000000..07c370b45a
--- /dev/null
+++ b/lib/ethdev/rte_flow_graph.h
@@ -0,0 +1,414 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _RTE_FLOW_GRAPH_H_
+#define _RTE_FLOW_GRAPH_H_
+
+/**
+ * @file
+ * RTE Flow Graph Parser (Internal Driver API)
+ *
+ * This file provides a graph-based flow pattern parser for PMD drivers.
+ * It defines structures and functions to validate and process rte_flow
+ * patterns using a directed graph representation.
+ *
+ * @warning
+ * This is an internal API for PMD drivers only. Applications must not use it.
+ */
+
+#include <rte_flow.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_FLOW_NODE_FIRST (0)
+/* Edge array termination sentinel (not a valid node index). */
+#define RTE_FLOW_NODE_EDGE_END ((size_t)-1)
+
+/**
+ * Many nodes share common patterns of validation behavior. This enum allows marking a node as
+ * implementing one of these common behaviors without having to express that in validation code.
+ * Values can be ORed together to accept multiple item layouts; the checks are not combined
+ * (satisfying any one of them is sufficient).
+ */
+enum rte_flow_graph_node_expect {
+ RTE_FLOW_NODE_EXPECT_NONE = 0, /**< No special constraints. */
+ RTE_FLOW_NODE_EXPECT_EMPTY = (1 << 0), /**< spec, mask, last must be NULL. */
+ RTE_FLOW_NODE_EXPECT_SPEC = (1 << 1), /**< spec is required, mask and last must be NULL. */
+ RTE_FLOW_NODE_EXPECT_MASK = (1 << 2), /**< mask is required, spec and last must be NULL. */
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK = (1 << 3), /**< spec and mask required, last must be NULL. */
+ RTE_FLOW_NODE_EXPECT_RANGE = (1 << 4), /**< spec, mask, and last are required. */
+ RTE_FLOW_NODE_EXPECT_NOT_RANGE = (1 << 5), /**< last must be NULL. */
+};
+
+/**
+ * Node validation callback.
+ *
+ * Called when the graph traversal reaches this node. Validates the
+ * rte_flow_item (spec, mask, last) against driver-specific constraints.
+ *
+ * Drivers are encouraged to perform all validation checks in this callback.
+ *
+ * @param ctx
+ * Opaque driver context for accumulating parsed state.
+ * @param item
+ * Pointer to the rte_flow_item being validated.
+ * @param error
+ * Pointer to rte_flow_error structure for reporting failures.
+ * @return
+ * 0 on success, negative errno on failure.
+ */
+typedef int (*rte_flow_node_validate_fn)(
+ const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error);
+
+/**
+ * Node processing callback.
+ *
+ * Called after validation succeeds. Extracts fields from the rte_flow_item
+ * and stores them in driver-specific state for later hardware programming.
+ *
+ * Drivers are encouraged to implement the "happy path" in this callback.
+ *
+ * @param ctx
+ * Opaque driver context for accumulating parsed state.
+ * @param item
+ * Pointer to the rte_flow_item to process.
+ * @param error
+ * Pointer to rte_flow_error structure for reporting failures.
+ * @return
+ * 0 on success, negative errno on failure.
+ */
+typedef int (*rte_flow_node_process_fn)(
+ void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error);
+
+/**
+ * Graph node definition.
+ *
+ * A zero-initialized node (NULL name and callbacks, type END) is considered unused.
+ */
+struct rte_flow_graph_node {
+ const char *name; /**< Node name. */
+ const enum rte_flow_item_type type; /**< Corresponding rte_flow_item_type. */
+ const enum rte_flow_graph_node_expect constraints; /**< Common validation constraints (ORed). */
+ rte_flow_node_validate_fn validate; /**< Validation callback (NULL if unsupported). */
+ rte_flow_node_process_fn process; /**< Processing callback (NULL if no extraction needed). */
+};
+
+/**
+ * Graph edge definition.
+ *
+ * Describes allowed transitions from one node to others. The 'next' array lists
+ * indices of all valid successor nodes and is terminated by RTE_FLOW_NODE_EDGE_END.
+ * Drivers define edges to express their supported protocol sequences.
+ */
+struct rte_flow_graph_edge {
+	const size_t *next; /**< Array of valid successor node indices, terminated by RTE_FLOW_NODE_EDGE_END. */
+};
+
+/**
+ * Flow graph to be implemented by drivers.
+ */
+struct rte_flow_graph {
+ struct rte_flow_graph_node *nodes;
+ struct rte_flow_graph_edge *edges;
+ const enum rte_flow_item_type *ignore_nodes; /**< Additional node types to ignore, terminated by RTE_FLOW_ITEM_TYPE_END. */
+};
+
+static inline bool
+__flow_graph_node_check_constraint(enum rte_flow_graph_node_expect c,
+ bool has_spec, bool has_mask, bool has_last)
+{
+ bool empty = !has_spec && !has_mask && !has_last;
+
+ if ((c & RTE_FLOW_NODE_EXPECT_EMPTY) && empty)
+ return true;
+ if ((c & RTE_FLOW_NODE_EXPECT_NOT_RANGE) && !has_last)
+ return true;
+ if ((c & RTE_FLOW_NODE_EXPECT_SPEC) && has_spec && !has_mask && !has_last)
+ return true;
+ if ((c & RTE_FLOW_NODE_EXPECT_MASK) && has_mask && !has_spec && !has_last)
+ return true;
+ if ((c & RTE_FLOW_NODE_EXPECT_SPEC_MASK) && has_spec && has_mask && !has_last)
+ return true;
+ if ((c & RTE_FLOW_NODE_EXPECT_RANGE) && has_mask && has_spec && has_last)
+ return true;
+
+ return false;
+}
+
+static inline bool
+__flow_graph_node_is_expected(const struct rte_flow_graph_node *node,
+ const struct rte_flow_item *item, struct rte_flow_error *error)
+{
+ enum rte_flow_graph_node_expect c = node->constraints;
+
+	/* the start node is visited with a NULL item and carries no item constraints */
+	if (item == NULL || c == RTE_FLOW_NODE_EXPECT_NONE)
+ return true;
+
+ bool has_spec = (item->spec != NULL);
+ bool has_mask = (item->mask != NULL);
+ bool has_last = (item->last != NULL);
+
+ if (__flow_graph_node_check_constraint(c, has_spec, has_mask, has_last))
+ return true;
+
+ /*
+ * In the interest of everyone debugging flow parsing code, we should provide the user with
+ * meaningful messages about exactly what failed, as no one likes non-descript "node
+ * constraints not met" errors with no clear indication of where this is even coming from.
+ * What follows is us building said meaningful error messages. It's a bit ugly, but it is
+ * for the greater good.
+ */
+ const char *msg;
+
+ /* for empty items, we know exactly what went wrong */
+ if (c == RTE_FLOW_NODE_EXPECT_EMPTY) {
+ if (has_spec)
+ msg = "Unexpected spec in flow item";
+ else if (has_mask)
+ msg = "Unexpected mask in flow item";
+ else /* has_last */
+ msg = "Unexpected last in flow item";
+ } else {
+ /*
+	 * for non-empty constraints, we need to figure out the one thing the user is missing
+ * (or has extra) that would've satisfied the constraints.
+ * We do that by flipping each presence bit in turn and seeing whether that single
+ * change would have satisfied the node constraints.
+ */
+
+ /* check spec first */
+ if (!has_spec && __flow_graph_node_check_constraint(c, true, has_mask, has_last)) {
+ msg = "Missing spec in flow item";
+ } else if (has_spec && __flow_graph_node_check_constraint(c, false, has_mask, has_last)) {
+ msg = "Unexpected spec in flow item";
+ }
+ /* check mask next */
+ else if (!has_mask && __flow_graph_node_check_constraint(c, has_spec, true, has_last)) {
+ msg = "Missing mask in flow item";
+ } else if (has_mask && __flow_graph_node_check_constraint(c, has_spec, false, has_last)) {
+ msg = "Unexpected mask in flow item";
+ }
+ /* finally, check range */
+ else if (!has_last && __flow_graph_node_check_constraint(c, has_spec, has_mask, true)) {
+ msg = "Missing last in flow item";
+ } else if (has_last && __flow_graph_node_check_constraint(c, has_spec, has_mask, false)) {
+ msg = "Unexpected last in flow item";
+ /* multiple things are wrong with the constraint, so just output a generic error */
+ } else {
+ msg = "Flow item does not meet node constraints";
+ }
+ }
+
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item, msg);
+
+ return false;
+}
+
+/**
+ * @internal
+ * Check if a graph node is null/unused.
+ *
+ * Valid graph nodes must at least define a name.
+ */
+__rte_internal
+static inline bool
+__flow_graph_node_is_null(const struct rte_flow_graph_node *node)
+{
+ return node->name == NULL && node->type == RTE_FLOW_ITEM_TYPE_END &&
+ node->process == NULL && node->validate == NULL;
+}
+
+/**
+ * @internal
+ * Check if a flow item type should be ignored by the graph.
+ *
+ * Checks if the item type is in the graph's ignore list.
+ */
+__rte_internal
+static inline bool
+__flow_graph_node_is_ignored(const struct rte_flow_graph *graph,
+ enum rte_flow_item_type fi_type)
+{
+ const enum rte_flow_item_type *ignored;
+
+ /* Always skip VOID items */
+ if (fi_type == RTE_FLOW_ITEM_TYPE_VOID)
+ return true;
+
+ if (graph->ignore_nodes == NULL)
+ return false;
+
+ for (ignored = graph->ignore_nodes; *ignored != RTE_FLOW_ITEM_TYPE_END; ignored++) {
+ if (*ignored == fi_type)
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * @internal
+ * Get the index of a node within a graph.
+ */
+__rte_internal
+static inline size_t
+__flow_graph_get_node_index(const struct rte_flow_graph *graph, const struct rte_flow_graph_node *node)
+{
+ return (size_t)(node - graph->nodes);
+}
+
+/**
+ * @internal
+ * Find the next node in the graph matching the given item type.
+ */
+__rte_internal
+static inline const struct rte_flow_graph_node *
+__flow_graph_find_next_node(const struct rte_flow_graph *graph,
+ const struct rte_flow_graph_node *cur_node,
+ enum rte_flow_item_type next_type)
+{
+ const size_t *next_nodes;
+ size_t cur_idx, edge_idx;
+
+ cur_idx = __flow_graph_get_node_index(graph, cur_node);
+ next_nodes = graph->edges[cur_idx].next;
+ if (next_nodes == NULL)
+ return NULL;
+
+ for (edge_idx = 0; next_nodes[edge_idx] != RTE_FLOW_NODE_EDGE_END; edge_idx++) {
+ const struct rte_flow_graph_node *tmp =
+ &graph->nodes[next_nodes[edge_idx]];
+ if (__flow_graph_node_is_null(tmp))
+ continue;
+ if (tmp->type == next_type)
+ return tmp;
+ }
+
+ return NULL;
+}
+
+/**
+ * @internal
+ * Visit (validate and extract) a node's item.
+ */
+__rte_internal
+static inline int
+__flow_graph_visit_node(const struct rte_flow_graph_node *node, void *ctx,
+ const struct rte_flow_item *item, struct rte_flow_error *error)
+{
+ int ret;
+
+ /* if we expect a certain type of node, check for it */
+ if (!__flow_graph_node_is_expected(node, item, error))
+ /* error already set */
+ return -1;
+
+ /* Does this node fit driver's criteria? */
+ if (node->validate != NULL) {
+ ret = node->validate(ctx, item, error);
+ if (ret != 0)
+ return ret;
+ }
+
+ /* Extract data from this item */
+ if (node->process != NULL) {
+ ret = node->process(ctx, item, error);
+ if (ret != 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * @internal
+ * Parse and validate a flow pattern using the flow graph.
+ *
+ * Traverses the pattern items and validates them against the driver's graph
+ * structure. For each item, checks that the transition from the current node
+ * is allowed, then invokes validation and processing callbacks.
+ *
+ * @param graph
+ * Pointer to the driver's flow graph definition with nodes and edges.
+ * @param pattern
+ * Array of rte_flow_item structures to parse, terminated by RTE_FLOW_ITEM_TYPE_END.
+ * @param error
+ * Pointer to rte_flow_error structure for reporting failures.
+ * @param ctx
+ * Opaque driver context for accumulating parsed state.
+ * @return
+ * 0 on success, negative errno on failure (error is set).
+ */
+__rte_internal
+static inline int
+rte_flow_graph_parse(const struct rte_flow_graph *graph, const struct rte_flow_item *pattern,
+ struct rte_flow_error *error, void *ctx)
+{
+ const struct rte_flow_graph_node *cur_node;
+ const struct rte_flow_item *item;
+ int ret;
+
+ if (graph == NULL || graph->nodes == NULL || graph->edges == NULL) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow graph is not defined");
+ }
+ if (pattern == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Flow pattern is NULL");
+ }
+
+	/* process the start node; it is visited with a NULL item */
+ cur_node = &graph->nodes[RTE_FLOW_NODE_FIRST];
+ ret = __flow_graph_visit_node(cur_node, ctx, NULL, error);
+ if (ret != 0)
+ return ret;
+
+ /* Traverse pattern items */
+ for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+
+ /* Skip items in the graph's ignore list */
+ if (__flow_graph_node_is_ignored(graph, item->type))
+ continue;
+
+ /* Find the next graph node for this item type */
+ cur_node = __flow_graph_find_next_node(graph, cur_node, item->type);
+ if (cur_node == NULL) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Pattern item not supported");
+ }
+ /* Validate and process the current item at this node */
+ ret = __flow_graph_visit_node(cur_node, ctx, item, error);
+ if (ret != 0)
+ return ret;
+ }
+
+ /* Pattern items have ended but we still need to process the end */
+ cur_node = __flow_graph_find_next_node(graph, cur_node, RTE_FLOW_ITEM_TYPE_END);
+ if (cur_node == NULL) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Pattern may not end at this point");
+ }
+ ret = __flow_graph_visit_node(cur_node, ctx, item, error);
+ if (ret != 0)
+ return ret;
+
+ return 0;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FLOW_GRAPH_H_ */
--
2.47.3
* [RFC PATCH v1 02/21] net/intel/common: add flow engines infrastructure
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 01/21] ethdev: add flow graph API Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 03/21] net/intel/common: add utility functions Anatoly Burakov
` (19 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Current implementations of flow engines in various drivers have a few issues
that need to be corrected.
For one, some of them are fundamentally incompatible with secondary
processes, because flow engine registration and creation will allocate
structures in shared memory but use process-local pointers to point to flow
engines and pattern tables.
For another, a lot of them are needlessly complicated and rely on a
separation between patterns and parsing that is hard to reason about and
maintain: they do not define a memory ownership model, they do not define
how parameter and pattern parsing should be approached, and they
occasionally do weird things like passing around pointers-to-void-pointers
or even using pointers as integer values.
These issues can be corrected, but given how much code there is in the
current infrastructure and how tightly coupled it is, it is easier to build
a new one from scratch and gradually migrate all engines to use it. This
patch is intended as a first step towards that goal, and defines both the
common data types to be used by all rte_flow parsers and the interaction
model that is to be followed by all drivers.
We define a set of structures that will represent:
- Defined rte_flow parsing interaction model and code flow (ops struct)
- Defined memory allocation and ownership model for all engines
- Scratch space format for all engines (variably allocated typed struct)
- Flow rule format for all engines (variably allocated typed struct)
- Engine definitions that are compatible with secondary process model
- Reference implementations of common rte_flow operations
- Various supporting infrastructure for parser customization, e.g. hooks
- Support for using custom allocation (e.g. for mempool-based alloc)
The design intent is heavily documented right inside the header and is to be
considered the authoritative design document for how to build rte_flow
parsers for Intel Ethernet drivers going forward.
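The secondary-process-compatible engine tracking described above (engines identified by an enum value used as a bit position in a 64-bit enabled-engines field, rather than by process-local pointers) can be sketched in a standalone form. All names here (engine_enable, engine_lookup, the engine table) are illustrative, not the actual ci_flow_engine API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* each engine gets a stable enum value usable as a bit position */
enum engine_type { ENGINE_ETHERTYPE, ENGINE_FDIR, ENGINE_HASH, ENGINE_MAX };

/* const engine table lives in read-only memory, identical in every process */
struct engine { const char *name; enum engine_type type; };

static const struct engine engines[ENGINE_MAX] = {
	[ENGINE_ETHERTYPE] = { "ethertype", ENGINE_ETHERTYPE },
	[ENGINE_FDIR]      = { "fdir",      ENGINE_FDIR },
	[ENGINE_HASH]      = { "hash",      ENGINE_HASH },
};

/* only this mask needs to live in shared device data; no pointers involved */
static void
engine_enable(uint64_t *enabled_mask, enum engine_type type)
{
	*enabled_mask |= UINT64_C(1) << type;
}

/* any process resolves the engine from the enum, never from a stored pointer */
static const struct engine *
engine_lookup(uint64_t enabled_mask, enum engine_type type)
{
	if ((enabled_mask & (UINT64_C(1) << type)) == 0)
		return NULL; /* engine not enabled on this port */
	return &engines[type];
}
```

Because a secondary process sees the same read-only table and shares only the integer mask through device data, it resolves the same engine definitions without ever dereferencing a pointer that was valid only in the primary process.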
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/common/flow_engine.h | 1003 ++++++++++++++++++++++++
1 file changed, 1003 insertions(+)
create mode 100644 drivers/net/intel/common/flow_engine.h
diff --git a/drivers/net/intel/common/flow_engine.h b/drivers/net/intel/common/flow_engine.h
new file mode 100644
index 0000000000..14feabb3ce
--- /dev/null
+++ b/drivers/net/intel/common/flow_engine.h
@@ -0,0 +1,1003 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_FLOW_ENGINE_H_
+#define _COMMON_INTEL_FLOW_ENGINE_H_
+
+#include <stddef.h>
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
+
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_tailq.h>
+#include <rte_rwlock.h>
+
+/*
+ * This is a common header for Intel Ethernet drivers' flow engine
+ * implementations. It defines the interfaces and data structures required to
+ * implement flow rule engines that can be plugged into the drivers' flow
+ * handling logic.
+ *
+ * Design considerations:
+ *
+ * 1. Ease of implementation
+ *
+ * The flow engine interface is designed to be as simple as possible with
+ * obvious defaults (i.e. not specifying something leads to behavior that
+ * would've been the most obvious in context). The point is not to produce a
+ * monstrous driver-within-a-driver framework, but rather to make engine
+ * definitions follow semantic expectations of what the engine actually does.
+ *
+ * All the boilerplate (flow management, engine enablement tracking, etc.) is
+ * handled by the common flow infrastructure, so the engine implementation only
+ * needs to focus on the actual logic of parsing and installing/uninstalling
+ * flow rules, and defining each step of the process as it pertains to each flow
+ * engine.
+ *
+ * It is expected that drivers will use other utility functions from the common
+ * flow-related code where applicable (e.g. flow_util.h, flow_check.h, etc.),
+ * however this is obviously up to each individual driver to handle.
+ *
+ * Default implementations for rte_flow API functions are also provided, but
+ * they are not mandatory to use - drivers may implement their own versions if
+ * they so choose, this is just a reference implementation.
+ *
+ * 2. Full secondary process compatibility
+ *
+ * In order to support rte_flow operations in secondary processes, we need to
+ * store which engines are enabled for particular driver instance, and resolve
+ * them at runtime. Therefore, instead of relying on function pointers, each
+ * engine is expected to define an enum of engine type, which is then used as a
+ * bitshift-mask into a driver-specific 64-bit field of enabled engines. This
+ * way, the engine definitions can be stored in read-only memory, and referenced
+ * by both primary and secondary processes without issues.
+ *
+ * Note that this does not imply that all drivers are therefore able to support
+ * rte_flow-related operations in secondary processes - that is still up to each
+ * driver to implement. This just ensures that the flow engine framework does
+ * not prevent it.
+ *
+ * 3. No memory management by engines
+ *
+ * Engines are expected to only fill in the provided memory areas, and not
+ * allocate or free any memory on their own. The only exception is per-engine
+ * internal data where the engine is free to alloc/free any additional resources
+ * on init/uninit, as this cannot be reasonably generalized by the framework.
+ *
+ * 4. All rte_flow pattern parsing is implemented using rte_flow_graph
+ *
+ * The flow engine framework is designed to work hand-in-hand with the
+ * rte_flow_graph parsing infrastructure. Each engine may provide a pattern
+ * graph that is used to match the flow pattern, and extract relevant data
+ * into the engine context. This allows for cleaner separation of concerns,
+ * where the engine focuses on handling actions and attributes, while the
+ * graph parser deals with the pattern matching.
+ *
+ * If the graph is not provided, a default empty pattern graph is used that
+ * matches either empty patterns or patterns consisting solely of "any" items.
+ */
+
+/* forward declarations for flow engine data types */
+struct ci_flow_engine_ops;
+struct ci_flow_engine_ctx;
+struct ci_flow_engine;
+struct ci_flow_engine_list;
+struct ci_flow_engine_conf;
+struct ci_flow;
+
+/*
+ * Flow engine ops.
+ *
+ * Each flow engine must provide a set of operations to handle common operations,
+ * such as:
+ *
+ * - Check whether the engine is available for use (is_available)
+ * - Initialize and clean up engine resources (init/uninit)
+ * - Allocate memory for flow rules (flow_alloc)
+ * - Parse flow attributes and actions into engine-specific context (ctx_parse)
+ * - Pattern graph to use when parsing flow patterns (graph)
+ * - Validate the parsed attributes and actions against the data parsed from pattern (ctx_validate)
+ * - Build the actual flow rule structure from the parsed context (ctx_to_flow)
+ * - Add/remove the flow rule to/from hardware or driver's internal state (flow_install/flow_uninstall)
+ * - Query data for the flow rule (flow_query)
+ *
+ * The intended flow and semantics is as follows:
+ *
+ * - at init time:
+ * [is_available] -> [init]
+ *
+ * - at rte_flow_validate time:
+ * ctx_parse -> graph parser -> [ctx_validate] -> [ctx_to_flow]
+ *
+ * - at rte_flow_create time:
+ * [flow_alloc] -> ctx_parse -> graph parser -> [ctx_validate] -> [ctx_to_flow -> [flow_install]]
+ *
+ * - at rte_flow_destroy/rte_flow_flush time:
+ * [flow_uninstall] -> [flow_free]
+ *
+ * - at rte_flow_query time:
+ * [flow_query]
+ *
+ * The engine availability must be checked by the driver at init time. The exact
+ * mechanics of this are left up to each individual driver - it may be hardware
+ * capability bits, PHY type check, devargs, or any other criteria that makes
+ * sense in the context of driver/adapter. If the availability callback is not
+ * implemented, the engine is assumed to be always available.
+ *
+ * The ctx_parse acts as the main gateway to parse flow actions and attributes,
+ * and so is mandatory. It is expected to fill the context structure.
+ *
+ * The init/uninit are optional resource lifecycle callbacks. If `priv_size` is
+ * non-zero, the framework allocates a zeroed private memory block and passes
+ * it to init/uninit.
+ *
+ * The flow_alloc is an optional allocator callback - if it is not defined, the
+ * engine will use rte_zmalloc with the provided flow_size to allocate memory
+ * for a flow rule. This callback will be useful for drivers who wish to e.g.
+ * allocate flow rules from a mempool, static memory, or any other custom memory
+ * management scheme. If allocation fails, fallback to default allocator will be
+ * used.
+ *
+ * Custom allocators only own their own engine-specific fields.
+ * The ci_flow common fields (engine_type, fallback_alloc, dev, node) are
+ * owned by the framework and will be initialised after the callback returns.
+ *
+ * The graph parser will take in the actual flow pattern, match it against the
+ * pattern graph, and put more data into the context structure. The engine may
+ * not provide a graph, in which case a default pattern graph is provided that
+ * will match the following:
+ *
+ * - empty patterns (start -> end)
+ * - "any" patterns (start -> any -> end)
+ *
+ * The ctx_validate is meant to perform any final checks on incongruity between
+ * the context data parsed from actions/attributes and the pattern graph. The
+ * engine may not provide this function, it is there purely to provide an avenue
+ * for cleaner logic separation where it makes sense.
+ *
+ * The ctx_to_flow uses the context data to fill in the actual flow structure.
+ *
+ * Finally, flow_install/flow_uninstall are meant to install/uninstall the flow
+ * and modify driver's internal structures and/or hardware state. The engine
+ * may not provide these functions if no internal state modification beyond
+ * storing the rule is required.
+ *
+ * The flow_free is an optional deallocator callback - if it is not defined,
+ * the engine will use rte_free to free memory for a flow rule. The flow_alloc
+ * and flow_free callbacks must either both be defined, or both be NULL.
+ *
+ * For querying data, the flow_query function is provided to query data for the
+ * flow rule. The engine may omit this function, in which case any attempt to
+ * query the rule will result in failure when using the reference
+ * implementation.
+ *
+ * If a custom implementation of parts of the engine is used, take care to
+ * lock the config at the appropriate times.
+ */
+struct ci_flow_engine_ops {
+ /* check whether engine is available - can be NULL */
+ bool (*is_available)(const struct ci_flow_engine *engine, const struct rte_eth_dev *dev);
+ /* init callback for engine-scoped resources - can be NULL */
+ int (*init)(const struct ci_flow_engine *engine,
+ struct rte_eth_dev *dev,
+ void *priv);
+ /* uninit callback for engine-scoped resources - can be NULL */
+ void (*uninit)(const struct ci_flow_engine *engine, void *priv);
+ /* allocation callback for flow rules - can be NULL */
+ struct ci_flow *(*flow_alloc)(const struct ci_flow_engine *engine, struct rte_eth_dev *dev);
+ /* deallocation callback for flow rules - can be NULL */
+ void (*flow_free)(struct ci_flow *flow);
+ /* initialize engine context from flow attr/actions - mandatory */
+ int (*ctx_parse)(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error);
+ /* final pass before converting context to flow - can be NULL */
+ int (*ctx_validate)(struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error);
+ /* initialize flow rule from parsed context - can be NULL */
+ int (*ctx_to_flow)(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error);
+ /* install a flow rule - can be NULL */
+ int (*flow_install)(struct ci_flow *flow,
+ struct rte_flow_error *error);
+ /* uninstall a flow rule - can be NULL */
+ int (*flow_uninstall)(struct ci_flow *flow,
+ struct rte_flow_error *error);
+ /* query flow - can be NULL */
+ int (*flow_query)(struct ci_flow *flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error);
+};
+
+/*
+ * Common definition for flow engine context.
+ * Each engine defines its own context structure that
+ * *must* start with this base structure.
+ */
+struct ci_flow_engine_ctx {
+ /* ethernet device this context belongs to */
+ struct rte_eth_dev *dev;
+};
+
+/*
+ * Common definition for flow rule.
+ *
+ * For flow rules, there are three parts to consider:
+ *
+ * 1) Common data
+ * 2) Driver-specific data
+ * 3) Engine-specific data
+ *
+ * The common data is defined here as the `ci_flow` structure. It contains
+ * fields that are common to all flow rules, regardless of driver or engine.
+ * This includes a linked list node for managing flow rules in a list, a pointer
+ * to the device (driver instance) the flow belongs to, and the engine type
+ * that created the flow.
+ *
+ * With rte_flow API, each driver is meant to define its own rte_flow structure
+ * that contains driver-specific data. This structure must start with the
+ * `ci_flow` structure defined here, followed by driver-specific fields.
+ *
+ * Additionally, each *engine* may want to define its own flow rule structure
+ * that contains actual engine-specific data. This structure must start with
+ * the driver-wide `rte_flow` structure, so that it embeds everything defined
+ * before it, followed by engine-specific fields.
+ *
+ * IMPORTANT:
+ *
+ * All of these structures will be referred to by the same pointer and can be
+ * freely (and safely) cast between each other *as long as* each structure
+ * definition has the parent structure as its first member. E.g. the common flow
+ * struct is `ci_flow`, and the driver-specific `rte_flow` must be defined as
+ * follows:
+ *
+ * struct rte_flow {
+ * struct ci_flow base;
+ * ...any driver-specific fields...
+ * }
+ *
+ * If the engine needs to define its own flow structure, in turn it should be
+ * defined as follows:
+ *
+ * struct ixgbe_fdir_flow {
+ * struct rte_flow base;
+ * ...any engine-specific fields...
+ * }
+ *
+ * This ensures pointer conversion safety between all three types:
+ *
+ * struct ci_flow *flow = ...;
+ * struct rte_flow *rte_flow = (struct rte_flow *)flow;
+ * struct ixgbe_fdir_flow *fdir_flow = (struct ixgbe_fdir_flow *)flow;
+ *
+ * The engine structure provides a `flow_size` field that indicates how much
+ * memory is required for a particular engine's flow structure. The driver must
+ * provide that value for each engine, as it will be used to size flow structure
+ * allocations. If the engine does not require any memory beyond the `rte_flow`
+ * structure, this value should be set to `sizeof(struct rte_flow)` for those
+ * engines.
+ */
+struct ci_flow {
+ TAILQ_ENTRY(ci_flow) node;
+ /* device this flow belongs to */
+ struct rte_eth_dev *dev;
+ /* engine this flow was created by */
+ size_t engine_type;
+ /* set if the engine has a custom allocator but the fallback allocator was used */
+ bool fallback_alloc;
+};
+
+/* flow engine definition */
+struct ci_flow_engine {
+ /* engine name */
+ const char *name;
+ /* engine type */
+ size_t type;
+ /* size of scratch space structure, can be 0 */
+ size_t ctx_size;
+ /* size of flow rule structure, must not be 0 */
+ size_t flow_size;
+ /* size of per-device engine private data, can be 0 */
+ size_t priv_size;
+ /* ops for this flow engine */
+ const struct ci_flow_engine_ops *ops;
+ /* pattern graph this engine supports - can be NULL to match empty patterns */
+ const struct rte_flow_graph *graph;
+};
+
+/* maximum number of engines is 63 + sentinel (NULL pointer) */
+#define CI_FLOW_ENGINE_MAX 63
+#define CI_FLOW_ENGINE_LIST_SIZE (CI_FLOW_ENGINE_MAX + 1)
+
+/* flow engine list definition */
+struct ci_flow_engine_list {
+ /* NULL-terminated array of flow engine pointers */
+ const struct ci_flow_engine *engines[CI_FLOW_ENGINE_LIST_SIZE];
+};
+
+/* flow engine configuration - each device must have its own instance */
+struct ci_flow_engine_conf {
+ /* lock to protect config */
+ rte_rwlock_t config_lock;
+ /* list of flows created on this device */
+ TAILQ_HEAD(ci_flow_list, ci_flow) flows;
+ /* bitmask of enabled engines */
+ uint64_t enabled_engines;
+ /* back-reference to device structure */
+ struct rte_eth_dev *dev;
+ /* per-engine private data pointers, indexed by engine type */
+ void *engine_priv[CI_FLOW_ENGINE_LIST_SIZE];
+};
+
+/* helper macro to iterate over list of engines */
+#define CI_FLOW_ENGINE_LIST_FOREACH(engine_ptr, engine_list) \
+ for (size_t __i = 0; \
+ __i < CI_FLOW_ENGINE_MAX && \
+ ((engine_ptr) = (engine_list)->engines[__i]) != NULL; \
+ __i++)
+
+/* basic checks for flow engine validity */
+static inline bool
+ci_flow_engine_is_valid(const struct ci_flow_engine *engine)
+{
+ /* is the pointer valid? */
+ if (engine == NULL)
+ return false;
+ /* does the engine have a name? */
+ if (engine->name == NULL)
+ return false;
+ /* is the engine type within bounds? */
+ if (engine->type >= CI_FLOW_ENGINE_MAX)
+ return false;
+ /* does the engine have ops? */
+ if (engine->ops == NULL)
+ return false;
+ /* does the engine have mandatory ctx_parse op? */
+ if (engine->ops->ctx_parse == NULL)
+ return false;
+ /* flow size cannot be less than ci_flow */
+ if (engine->flow_size < sizeof(struct ci_flow))
+ return false;
+ /* alloc and free must both be defined or NULL */
+ if ((engine->ops->flow_alloc == NULL) != (engine->ops->flow_free == NULL))
+ return false;
+ /* engine looks valid */
+ return true;
+}
+
+/* helper to validate whether an engine can be enabled - thread-unsafe */
+static inline bool
+ci_flow_engine_is_supported(const struct ci_flow_engine *engine, const struct rte_eth_dev *dev)
+{
+ /* basic checks */
+ if (!ci_flow_engine_is_valid(engine))
+ return false;
+
+ /* does it have engine-specific validation? */
+ if (engine->ops->is_available != NULL) {
+ return engine->ops->is_available(engine, dev);
+ }
+ /* no specific validation required, allow by default */
+ return true;
+}
+
+/* helper to check whether an engine is enabled in the bitmask - thread-unsafe */
+static inline bool
+ci_flow_engine_enabled(const struct ci_flow_engine_conf *conf, const struct ci_flow_engine *engine)
+{
+ return (conf->enabled_engines & (1ULL << engine->type)) != 0;
+}
+
+/* helper to enable an engine in the bitmask - thread-unsafe */
+static inline void
+ci_flow_engine_enable(struct ci_flow_engine_conf *conf, const struct ci_flow_engine *engine)
+{
+ conf->enabled_engines |= (1ULL << engine->type);
+}
+
+/* find engine by flow type in the engine list - thread-unsafe */
+static inline const struct ci_flow_engine *
+ci_flow_engine_find(const struct ci_flow_engine_list *engine_list, const size_t type)
+{
+ const struct ci_flow_engine *engine;
+
+ CI_FLOW_ENGINE_LIST_FOREACH(engine, engine_list) {
+ if (engine->type == type)
+ return engine;
+ }
+ return NULL;
+}
+
+/* get per-device engine private data by engine type - thread-unsafe */
+static inline void *
+ci_flow_engine_priv(const struct ci_flow_engine_conf *engine_conf, const size_t type)
+{
+ return engine_conf->engine_priv[type];
+}
+
+static inline struct ci_flow *
+ci_flow_alloc(const struct ci_flow_engine *engine, struct rte_eth_dev *dev)
+{
+ struct ci_flow *flow = NULL;
+ bool fallback = false;
+
+ /* if engine has custom allocator, try it first */
+ if (engine->ops->flow_alloc != NULL) {
+ flow = engine->ops->flow_alloc(engine, dev);
+ /* erase the common parts */
+ if (flow != NULL)
+ *flow = (struct ci_flow){0};
+ }
+ /* if custom allocator is not defined or failed, fallback to default allocator */
+ if (flow == NULL) {
+ flow = rte_zmalloc(NULL, engine->flow_size, 0);
+
+ /* if we are here and we have a custom allocator, that means we fell back */
+ if (flow != NULL && engine->ops->flow_alloc != NULL)
+ fallback = true;
+ }
+ /* set the engine type to enable correct deallocation in case of failure */
+ if (flow != NULL) {
+ flow->fallback_alloc = fallback;
+ flow->engine_type = engine->type;
+ }
+ return flow;
+}
+
+static inline void
+ci_flow_free(const struct ci_flow_engine *engine, struct ci_flow *flow)
+{
+ if (engine->ops->flow_free != NULL && !flow->fallback_alloc)
+ engine->ops->flow_free(flow);
+ else
+ rte_free(flow);
+}
+
+/* allocate per-device engine private data and call init - thread-unsafe */
+static inline int
+ci_flow_engine_init(const struct ci_flow_engine *engine,
+ struct ci_flow_engine_conf *engine_conf)
+{
+ void *priv = NULL;
+ int ret;
+
+ if (engine->priv_size > 0) {
+ priv = rte_zmalloc(engine->name, engine->priv_size, 0);
+ if (priv == NULL) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ engine_conf->engine_priv[engine->type] = priv;
+ }
+
+ if (engine->ops->init != NULL) {
+ ret = engine->ops->init(engine, engine_conf->dev,
+ engine_conf->engine_priv[engine->type]);
+ if (ret != 0)
+ goto err;
+ }
+ return 0;
+err:
+ if (priv != NULL) {
+ rte_free(priv);
+ engine_conf->engine_priv[engine->type] = NULL;
+ }
+ return ret;
+}
+
+/* call uninit and free per-device engine private data - thread-unsafe */
+static inline void
+ci_flow_engine_uninit(const struct ci_flow_engine *engine,
+ struct ci_flow_engine_conf *engine_conf)
+{
+ void *priv = engine_conf->engine_priv[engine->type];
+
+ /* ignore uninit errors */
+ if (engine->ops->uninit != NULL)
+ engine->ops->uninit(engine, priv);
+
+ if (priv != NULL) {
+ rte_free(priv);
+ engine_conf->engine_priv[engine->type] = NULL;
+ }
+}
+
+/* disable all engines for a specific driver instance - thread-safe */
+static inline void
+ci_flow_engine_conf_reset(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list)
+{
+ const struct ci_flow_engine *engine;
+ struct ci_flow *flow, *tmp;
+
+ /* lock the config */
+ rte_rwlock_write_lock(&engine_conf->config_lock);
+
+ /* free all flows - shouldn't have any at this point */
+ RTE_TAILQ_FOREACH_SAFE(flow, &engine_conf->flows, node, tmp) {
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+ TAILQ_REMOVE(&engine_conf->flows, flow, node);
+ ci_flow_free(engine, flow);
+ }
+
+ CI_FLOW_ENGINE_LIST_FOREACH(engine, engine_list) {
+ if (!ci_flow_engine_enabled(engine_conf, engine))
+ continue;
+ ci_flow_engine_uninit(engine, engine_conf);
+ }
+
+ /* disable all engines */
+ engine_conf->enabled_engines = 0;
+
+ /* erase device pointer */
+ engine_conf->dev = NULL;
+
+ /* unlock the config */
+ rte_rwlock_write_unlock(&engine_conf->config_lock);
+}
+
+/* enable all engines for a specific driver instance - thread-unsafe */
+static inline void
+ci_flow_engine_conf_init(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ struct rte_eth_dev *dev)
+{
+ const struct ci_flow_engine *engine;
+
+ /* init the lock */
+ rte_rwlock_init(&engine_conf->config_lock);
+
+ /* store device pointer */
+ engine_conf->dev = dev;
+
+ /* init the flow list */
+ TAILQ_INIT(&engine_conf->flows);
+
+ /* enable all engines */
+ CI_FLOW_ENGINE_LIST_FOREACH(engine, engine_list) {
+ if (!ci_flow_engine_is_supported(engine, dev))
+ continue;
+
+ if (ci_flow_engine_init(engine, engine_conf) != 0)
+ continue;
+
+ ci_flow_engine_enable(engine_conf, engine);
+ }
+}
+
+/* validate whether a flow is valid for a specific engine configuration - thread-unsafe */
+static inline bool
+ci_flow_is_valid(const struct ci_flow *flow,
+ const struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list)
+{
+ const struct ci_flow_engine *engine;
+
+ /* is the pointer valid? */
+ if (flow == NULL)
+ return false;
+ /* does the flow belong to this device? */
+ if (flow->dev != engine_conf->dev)
+ return false;
+ /* can we find the engine that created this flow? */
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+ if (engine == NULL)
+ return false;
+ /* engine must be valid */
+ if (!ci_flow_engine_is_valid(engine))
+ return false;
+ /* engine must be enabled */
+ if (!ci_flow_engine_enabled(engine_conf, engine))
+ return false;
+ /* flow looks valid */
+ return true;
+}
+
+/* default empty pattern graph definitions */
+enum ci_flow_empty_graph_node_id {
+ CI_FLOW_EMPTY_GRAPH_NODE_START = RTE_FLOW_NODE_FIRST,
+ CI_FLOW_EMPTY_GRAPH_NODE_ANY,
+ CI_FLOW_EMPTY_GRAPH_NODE_END,
+};
+
+static const struct rte_flow_graph ci_flow_empty_graph = {
+ .nodes = (struct rte_flow_graph_node []) {
+ [CI_FLOW_EMPTY_GRAPH_NODE_START] = {
+ .name = "START",
+ },
+ [CI_FLOW_EMPTY_GRAPH_NODE_ANY] = {
+ .name = "ANY",
+ .type = RTE_FLOW_ITEM_TYPE_ANY,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [CI_FLOW_EMPTY_GRAPH_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge []) {
+ [CI_FLOW_EMPTY_GRAPH_NODE_START] = {
+ .next = (const size_t []) {
+ CI_FLOW_EMPTY_GRAPH_NODE_ANY,
+ CI_FLOW_EMPTY_GRAPH_NODE_END,
+ RTE_FLOW_NODE_EDGE_END,
+ },
+ },
+ [CI_FLOW_EMPTY_GRAPH_NODE_ANY] = {
+ .next = (const size_t []) {
+ CI_FLOW_EMPTY_GRAPH_NODE_END,
+ RTE_FLOW_NODE_EDGE_END,
+ },
+ },
+ }
+};
+
+/* parse a flow using a specific engine - thread-unsafe */
+static inline int
+ci_flow_parse(const struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine *engine,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ci_flow_engine_ctx *ctx;
+ int ret = 0;
+
+ /* engines that aren't enabled cannot be used for validation */
+ if (!ci_flow_engine_enabled(engine_conf, engine)) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow engine is not enabled");
+ }
+
+ /* allocate context */
+ ctx = calloc(1, RTE_MAX(engine->ctx_size, sizeof(struct ci_flow_engine_ctx)));
+ if (ctx == NULL) {
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Failed to allocate memory for rule engine context");
+ }
+ ctx->dev = engine_conf->dev;
+ flow->dev = engine_conf->dev;
+
+ /* parse flow parameters */
+ ret = engine->ops->ctx_parse(actions, attr, ctx, error);
+
+ /* context init failed - that means engine can't be used for this flow */
+ if (ret != 0)
+ goto free_ctx;
+
+ /* NULL pattern is allowed for engines that don't match any patterns */
+ if (pattern == NULL && engine->graph != NULL) {
+ ret = rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Pattern cannot be NULL");
+ goto free_ctx;
+ } else if (pattern != NULL) {
+ const struct rte_flow_graph *graph = engine->graph;
+
+ /* if graph isn't provided, use empty graph */
+ if (graph == NULL)
+ graph = &ci_flow_empty_graph;
+
+ ret = rte_flow_graph_parse(graph, pattern, error, ctx);
+ }
+
+ /* if graph parsing failed, pattern didn't match */
+ if (ret != 0)
+ goto free_ctx;
+
+ /* final verification, if the operation is defined */
+ if (engine->ops->ctx_validate != NULL)
+ ret = engine->ops->ctx_validate(ctx, error);
+
+ /* finalization failed - mismatch between parsed data and context data */
+ if (ret != 0)
+ goto free_ctx;
+
+ /* if we need to build rules from context, do it */
+ if (engine->ops->ctx_to_flow != NULL) {
+ ret = engine->ops->ctx_to_flow(ctx, flow, error);
+
+ /* flow building failed - something wrong with context data */
+ if (ret != 0)
+ goto free_ctx;
+ }
+ /* success */
+ ret = 0;
+
+free_ctx:
+ free(ctx);
+ return ret;
+}
+
+/* uninstall a flow using its appropriate engine - thread-unsafe */
+static inline int
+ci_flow_uninstall(const struct ci_flow_engine_list *engine_list,
+ struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct ci_flow_engine *engine;
+
+ /* find the engine that created this flow */
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+ if (engine == NULL) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow engine that created this flow is not available");
+ }
+
+ /* uninstall the flow if required */
+ if (engine->ops->flow_uninstall != NULL) {
+ return engine->ops->flow_uninstall(flow, error);
+ }
+
+ return 0;
+}
+
+/*
+ * The following functions are designed to be called from the context of
+ * rte_flow API implementations and are meant to be used as reference/default
+ * implementations.
+ *
+ * Thread-safe.
+ */
+
+/* default implementation of rte_flow_create using flow engines */
+static inline struct rte_flow *
+ci_flow_create(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ const struct ci_flow_engine *engine;
+ struct ci_flow *flow = NULL;
+ int ret;
+
+ if (attr == NULL || actions == NULL) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+ "Attributes and actions cannot be NULL");
+ return NULL;
+ }
+
+ /* lock the config for writing */
+ rte_rwlock_write_lock(&engine_conf->config_lock);
+
+ /* find an engine that can handle this flow */
+ CI_FLOW_ENGINE_LIST_FOREACH(engine, engine_list) {
+ if (!ci_flow_engine_enabled(engine_conf, engine))
+ continue;
+
+ flow = ci_flow_alloc(engine, engine_conf->dev);
+ if (flow == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Failed to allocate memory for flow rule");
+ /* this is a serious error so don't continue */
+ goto unlock;
+ }
+
+ ret = ci_flow_parse(engine_conf, engine, attr, pattern,
+ actions, flow, error);
+
+ /* if successfully parsed, install */
+ if (ret == 0 && engine->ops->flow_install != NULL) {
+ ret = engine->ops->flow_install(flow, error);
+ }
+
+ if (ret == 0) {
+ /* success - track the flow on this device */
+ TAILQ_INSERT_TAIL(&engine_conf->flows, flow, node);
+ goto unlock;
+ }
+
+ /* parsing or installation failed - free the flow and try next engine */
+ ci_flow_free(engine, flow);
+ }
+
+ /* no engine could handle this flow */
+ flow = NULL;
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "No flow engine could handle the requested flow");
+unlock:
+ rte_rwlock_write_unlock(&engine_conf->config_lock);
+
+ return (struct rte_flow *)flow;
+}
+
+/* default implementation of rte_flow_validate using flow engines */
+static inline int
+ci_flow_validate(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ const struct ci_flow_engine *engine;
+ int ret;
+
+ if (attr == NULL || actions == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+ "Attributes and actions cannot be NULL");
+ }
+
+ /* lock the config for reading */
+ rte_rwlock_read_lock(&engine_conf->config_lock);
+
+ /* find an engine that can handle this flow */
+ CI_FLOW_ENGINE_LIST_FOREACH(engine, engine_list) {
+ struct ci_flow *flow;
+
+ if (!ci_flow_engine_enabled(engine_conf, engine))
+ continue;
+
+ /* use OS allocator as we're not keeping the flow */
+ flow = calloc(1, engine->flow_size);
+ if (flow == NULL) {
+ /* this is a serious error so don't continue */
+ ret = rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Failed to allocate memory for flow rule");
+ goto unlock;
+ }
+ /* try to parse the flow with this engine */
+ ret = ci_flow_parse(engine_conf, engine, attr, pattern,
+ actions, flow, error);
+ free(flow);
+
+ if (ret == 0) {
+ /* success */
+ goto unlock;
+ }
+ }
+ /* no engine could handle this flow */
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "No flow engine could handle the requested flow");
+unlock:
+ rte_rwlock_read_unlock(&engine_conf->config_lock);
+ return ret;
+}
+
+/* default implementation of rte_flow_destroy using flow engines. */
+static inline int
+ci_flow_destroy(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ struct rte_flow *rte_flow,
+ struct rte_flow_error *error)
+{
+ struct ci_flow *flow = (struct ci_flow *)rte_flow;
+ const struct ci_flow_engine *engine;
+ int ret = 0;
+
+ if (rte_flow == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Flow handle cannot be NULL");
+ }
+
+ /* lock the config for writing */
+ rte_rwlock_write_lock(&engine_conf->config_lock);
+
+ /* find the engine that created this flow */
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+
+ /* validate the flow */
+ if (!ci_flow_is_valid(flow, engine_conf, engine_list) || engine == NULL) {
+ ret = rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Invalid flow handle");
+ goto unlock;
+ }
+
+ ret = ci_flow_uninstall(engine_list, flow, error);
+
+ if (ret != 0)
+ goto unlock;
+
+ /* remove the flow from the list and free it */
+ TAILQ_REMOVE(&engine_conf->flows, flow, node);
+ ci_flow_free(engine, flow);
+unlock:
+ rte_rwlock_write_unlock(&engine_conf->config_lock);
+
+ return ret;
+}
+
+/* default implementation of rte_flow_flush using flow engines */
+static inline int
+ci_flow_flush(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ struct rte_flow_error *error)
+{
+ struct ci_flow *flow, *tmp;
+
+ /* lock the config for writing */
+ rte_rwlock_write_lock(&engine_conf->config_lock);
+
+ /* iterate over all flows and uninstall them */
+ RTE_TAILQ_FOREACH_SAFE(flow, &engine_conf->flows, node, tmp) {
+ const struct ci_flow_engine *engine;
+
+ /* find the engine that created this flow */
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+ /* shouldn't happen */
+ if (engine == NULL)
+ continue;
+
+ /* ignore failures */
+ ci_flow_uninstall(engine_list, flow, error);
+
+ TAILQ_REMOVE(&engine_conf->flows, flow, node);
+ ci_flow_free(engine, flow);
+ }
+
+ rte_rwlock_write_unlock(&engine_conf->config_lock);
+
+ return 0;
+}
+
+/* default implementation of rte_flow_query using flow engines */
+static inline int
+ci_flow_query(struct ci_flow_engine_conf *engine_conf,
+ const struct ci_flow_engine_list *engine_list,
+ struct rte_flow *rte_flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ struct ci_flow *flow = (struct ci_flow *)rte_flow;
+ const struct ci_flow_engine *engine;
+ int ret;
+
+ if (action == NULL || data == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "Action or data cannot be NULL");
+ }
+
+ /* lock the config for reading */
+ rte_rwlock_read_lock(&engine_conf->config_lock);
+
+ /* validate the flow first */
+ if (!ci_flow_is_valid(flow, engine_conf, engine_list)) {
+ ret = rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Invalid flow handle");
+ goto unlock;
+ }
+ /* find the engine that created this flow */
+ engine = ci_flow_engine_find(engine_list, flow->engine_type);
+ if (engine == NULL) {
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow engine that created this flow is not available");
+ goto unlock;
+ }
+ /* query the flow if supported */
+ if (engine->ops->flow_query != NULL) {
+ ret = engine->ops->flow_query(flow, action, data, error);
+ } else {
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow engine does not support querying");
+ }
+unlock:
+ rte_rwlock_read_unlock(&engine_conf->config_lock);
+
+ return ret;
+}
+
+#endif /* _COMMON_INTEL_FLOW_ENGINE_H_ */
--
2.47.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
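The base-first embedding contract described in the `ci_flow` comment can be checked with a small standalone program. The `_demo` struct names below are hypothetical stand-ins for `ci_flow`/`rte_flow`/`ixgbe_fdir_flow`, since the real definitions depend on DPDK headers:

```c
#include <stddef.h>

/* Base structure, mimicking ci_flow (list node and device pointer omitted). */
struct ci_flow_demo {
	size_t engine_type;
};

/* Driver-wide flow structure: the base must be the FIRST member. */
struct rte_flow_demo {
	struct ci_flow_demo base;
	int driver_field;
};

/* Engine-specific flow structure: the driver struct must be the FIRST member. */
struct fdir_flow_demo {
	struct rte_flow_demo base;
	int engine_field;
};

/* Because each parent is the first member, all three views alias the same
 * address, so a pointer to the engine struct can be read through the base. */
static size_t
engine_type_via_base(struct fdir_flow_demo *fdir)
{
	struct ci_flow_demo *common = (struct ci_flow_demo *)fdir;
	return common->engine_type;
}
```

If any of the structures put a different member first, the casts would no longer be well-defined, which is why `ci_flow_engine_is_valid` insists `flow_size >= sizeof(struct ci_flow)`.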
* [RFC PATCH v1 03/21] net/intel/common: add utility functions
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 01/21] ethdev: add flow graph API Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 02/21] net/intel/common: add flow engines infrastructure Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 04/21] net/ixgbe: add support for common flow parsing Anatoly Burakov
` (18 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Many parsers rely on doing the same checks over and over, so
create a header with utility functions to aid in writing parsers.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/common/flow_util.h | 165 +++++++++++++++++++++++++++
1 file changed, 165 insertions(+)
create mode 100644 drivers/net/intel/common/flow_util.h
diff --git a/drivers/net/intel/common/flow_util.h b/drivers/net/intel/common/flow_util.h
new file mode 100644
index 0000000000..1337e6c55d
--- /dev/null
+++ b/drivers/net/intel/common/flow_util.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Intel Corporation
+ */
+
+#ifndef _INTEL_COMMON_FLOW_UTIL_H_
+#define _INTEL_COMMON_FLOW_UTIL_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+
+/*
+ * Utility functions primarily intended for flow parsers.
+ */
+
+/**
+ * ci_is_all_byte - Check if memory region is filled with a specific byte value.
+ *
+ * @param ptr: Pointer to memory region
+ * @param len: Length in bytes
+ * @param val: Byte value to check (e.g., 0x00 or 0xFF)
+ *
+ * @return true if all bytes equal @val, false otherwise
+ */
+static inline bool
+ci_is_all_byte(const void *ptr, size_t len, uint8_t val)
+{
+ const uint8_t *bytes = ptr;
+ const uint32_t pattern32 = 0x01010101U * val;
+ size_t i = 0;
+
+ /* Process 4-byte chunks using memcpy */
+ for (; i + 4 <= len; i += 4) {
+ uint32_t chunk;
+ memcpy(&chunk, bytes + i, 4);
+ if (chunk != pattern32)
+ return false;
+ }
+
+ /* Process remaining bytes */
+ for (; i < len; i++) {
+ if (bytes[i] != val)
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * ci_is_all_zero_or_masked - Check if bytes are all 0x00 OR all 0xFF.
+ * @param ptr: Pointer to memory region
+ * @param len: Length in bytes
+ *
+ * @return true if all bytes are 0x00 OR all bytes are 0xFF, false otherwise
+ */
+static inline bool
+ci_is_all_zero_or_masked(const void *ptr, size_t len)
+{
+ const uint8_t *bytes = (const uint8_t *)ptr;
+ uint8_t first_val;
+
+ /* zero length cannot be valid */
+ if (len == 0)
+ return false;
+
+ first_val = bytes[0];
+
+ if (first_val != 0x00 && first_val != 0xFF)
+ return false;
+
+ return ci_is_all_byte(ptr, len, first_val);
+}
+
+/**
+ * ci_is_zero_or_masked - Check if a value is zero OR matches the mask exactly.
+ *
+ * @param value: Data value to check
+ * @param mask: Mask to compare against
+ *
+ * @return true if (value == 0) OR (value == mask), false otherwise
+ *
+ * Usage notes: this is intended for bitfields e.g. VLAN_TCI
+ * For byte-aligned fields, use CI_FIELD_IS_ZERO_OR_MASKED below.
+ */
+static inline bool
+ci_is_zero_or_masked(uint64_t value, uint64_t mask)
+{
+ uint64_t masked_val = value & mask;
+
+ return masked_val == 0 || masked_val == mask;
+}
+
+/**
+ * Check if a struct field is fully masked or unmasked.
+ *
+ * @param field_ptr: Pointer to the mask field (e.g., &eth_mask->hdr.src_addr)
+ */
+#define CI_FIELD_IS_ZERO_OR_MASKED(field_ptr) \
+ ci_is_all_zero_or_masked((field_ptr), sizeof(*(field_ptr)))
+
+/**
+ * Check if a struct field is all 0x00.
+ *
+ * @param field_ptr: Pointer to the mask field (e.g., &eth_mask->hdr.src_addr)
+ */
+#define CI_FIELD_IS_ZERO(field_ptr) \
+ ci_is_all_byte((field_ptr), sizeof(*(field_ptr)), 0x00)
+
+/**
+ * Check if a struct field is all 0xFF.
+ *
+ * @param field_ptr: Pointer to the mask field (e.g., &eth_mask->hdr.src_addr)
+ */
+#define CI_FIELD_IS_MASKED(field_ptr) \
+ ci_is_all_byte((field_ptr), sizeof(*(field_ptr)), 0xFF)
+
+/**
+ * ci_be24_to_cpu - Convert 24-bit big-endian value to host byte order.
+ * @param val: Pointer to 3-byte big-endian value
+ *
+ * @return uint32_t value in host byte order
+ *
+ * Usage: extract 24-bit big-endian values (e.g. VXLAN VNI).
+ */
+static inline uint32_t
+ci_be24_to_cpu(const uint8_t val[3])
+{
+ return (val[0] << 16) | (val[1] << 8) | val[2];
+}
+
+/**
+ * ci_is_hex_char - Check if a character is a valid hexadecimal digit.
+ *
+ * @param c: Character to check
+ *
+ * @return true if c is in [0-9a-fA-F], false otherwise
+ */
+static inline bool
+ci_is_hex_char(unsigned char c)
+{
+ return ((c >= '0' && c <= '9') ||
+ (c >= 'a' && c <= 'f') ||
+ (c >= 'A' && c <= 'F'));
+}
+
+/**
+ * ci_hex_char_to_nibble - Convert hex character to 4-bit value.
+ *
+ * @param c: Hex character ('0'-'9', 'a'-'f', 'A'-'F')
+ *
+ * @return Value 0-15, or 0 if invalid. Since 0 is also the result for '0',
+ * callers should validate input with ci_is_hex_char first.
+ */
+static inline unsigned char
+ci_hex_char_to_nibble(unsigned char c)
+{
+ if (c >= '0' && c <= '9')
+ return c - '0';
+ if (c >= 'a' && c <= 'f')
+ return c - 'a' + 10;
+ if (c >= 'A' && c <= 'F')
+ return c - 'A' + 10;
+ return 0;
+}
+
+#endif /* _INTEL_COMMON_FLOW_UTIL_H_ */
--
2.47.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
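The mask helpers above compose into the usual "fully masked or fully unmasked" check a parser performs on each item field. The sketch below copies simplified versions of the helpers inline (dropping the 4-byte fast path) so it builds without the Intel driver tree; `mac_mask_is_supported` is a hypothetical caller:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified copy of ci_is_all_byte from flow_util.h. */
static bool
ci_is_all_byte(const void *ptr, size_t len, uint8_t val)
{
	const uint8_t *bytes = ptr;

	for (size_t i = 0; i < len; i++)
		if (bytes[i] != val)
			return false;
	return true;
}

/* Simplified copy of ci_is_all_zero_or_masked from flow_util.h. */
static bool
ci_is_all_zero_or_masked(const void *ptr, size_t len)
{
	const uint8_t *bytes = ptr;

	if (len == 0)
		return false;
	if (bytes[0] != 0x00 && bytes[0] != 0xFF)
		return false;
	return ci_is_all_byte(ptr, len, bytes[0]);
}

/* Copy of ci_be24_to_cpu: extract a 24-bit big-endian value (e.g. VXLAN VNI). */
static uint32_t
ci_be24_to_cpu(const uint8_t val[3])
{
	return ((uint32_t)val[0] << 16) | ((uint32_t)val[1] << 8) | val[2];
}

/* Hypothetical parser check: hardware that cannot match on a partial MAC mask
 * accepts only an all-zero (don't care) or all-ones (exact match) mask. */
static bool
mac_mask_is_supported(const uint8_t mac_mask[6])
{
	return ci_is_all_zero_or_masked(mac_mask, 6);
}
```

A partially-masked field (some bytes 0xFF, some 0x00) is rejected, which is exactly the condition most fixed-function filters cannot express.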
* [RFC PATCH v1 04/21] net/ixgbe: add support for common flow parsing
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (2 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 03/21] net/intel/common: add utility functions Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 05/21] net/ixgbe: reimplement ethertype parser Anatoly Burakov
` (17 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Implement support for common flow parsing infrastructure in preparation for
migration of flow engines.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 12 ++++++--
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 5 +++
drivers/net/intel/ixgbe/ixgbe_flow.c | 42 +++++++++++++++++++++++++-
drivers/net/intel/ixgbe/ixgbe_flow.h | 12 ++++++++
4 files changed, 68 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow.h
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 57d929cf2c..b4435acd20 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -46,6 +46,7 @@
#include "base/ixgbe_phy.h"
#include "base/ixgbe_osdep.h"
#include "ixgbe_regs.h"
+#include "ixgbe_flow.h"
/*
* High threshold controlling when to start sending XOFF frames. Must be at
@@ -1342,6 +1343,10 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
/* initialize Traffic Manager configuration */
ixgbe_tm_conf_init(eth_dev);
+ /* initialize flow engine configuration */
+ ci_flow_engine_conf_init(&ad->flow_engine_conf,
+ &ixgbe_flow_engine_list, eth_dev);
+
return 0;
err_l2_tn_filter_init:
@@ -3080,8 +3085,8 @@ ixgbe_dev_set_link_down(struct rte_eth_dev *dev)
static int
ixgbe_dev_close(struct rte_eth_dev *dev)
{
- struct ixgbe_hw *hw =
- IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ixgbe_adapter *ad = dev->data->dev_private;
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(ad);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
@@ -3145,6 +3150,9 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
rte_free(dev->security_ctx);
dev->security_ctx = NULL;
+ /* reset rte_flow config */
+ ci_flow_engine_conf_reset(&ad->flow_engine_conf, &ixgbe_flow_engine_list);
+
return ret;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 32d7b98ed1..eaeeb35dea 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -22,6 +22,8 @@
#include <bus_pci_driver.h>
#include <rte_tm_driver.h>
+#include "../common/flow_engine.h"
+
/* need update link, bit flag */
#define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
#define IXGBE_FLAG_MAILBOX (uint32_t)(1 << 1)
@@ -344,6 +346,7 @@ struct ixgbe_l2_tn_info {
};
struct rte_flow {
+ struct ci_flow flow;
enum rte_filter_type filter_type;
/* security flows are not rte_filter_type */
bool is_security;
@@ -486,6 +489,8 @@ struct ixgbe_adapter {
struct rte_timecounter tx_tstamp_tc;
struct ixgbe_tm_conf tm_conf;
+ struct ci_flow_engine_conf flow_engine_conf;
+
/* For RSS reta table update */
uint8_t rss_reta_updated;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index d36e276ee0..3f33d28207 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -30,6 +30,7 @@
#include <rte_hash_crc.h>
#include <rte_flow.h>
#include <rte_flow_driver.h>
+#include <rte_tailq.h>
#include "ixgbe_logs.h"
#include "base/ixgbe_api.h"
@@ -44,7 +45,7 @@
#include "rte_pmd_ixgbe.h"
#include "../common/flow_check.h"
-
+#include "../common/flow_engine.h"
#define IXGBE_MIN_N_TUPLE_PRIO 1
#define IXGBE_MAX_N_TUPLE_PRIO 7
@@ -102,6 +103,8 @@ static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
static struct ixgbe_rss_filter_list filter_rss_list;
static struct ixgbe_flow_mem_list ixgbe_flow_list;
+const struct ci_flow_engine_list ixgbe_flow_engine_list = {0};
+
/**
* Endless loop will never happen with below assumption
* 1. there is at least one no-void item(END)
@@ -2789,6 +2792,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
struct rte_eth_ntuple_filter ntuple_filter;
struct rte_eth_ethertype_filter ethertype_filter;
@@ -2808,6 +2812,15 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
uint8_t first_mask = FALSE;
+ /* try the new flow engine first */
+ flow = ci_flow_create(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
+ attr, pattern, actions, error);
+ if (flow != NULL) {
+ return flow;
+ }
+
+ /* fall back to legacy flow engines */
+
flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
if (!flow) {
PMD_DRV_LOG(ERR, "failed to allocate memory");
@@ -3052,6 +3065,7 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ struct ixgbe_adapter *ad = dev->data->dev_private;
struct rte_eth_ntuple_filter ntuple_filter;
struct rte_eth_ethertype_filter ethertype_filter;
struct rte_eth_syn_filter syn_filter;
@@ -3060,6 +3074,14 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
+ /* try the new flow engine first */
+ ret = ci_flow_validate(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
+ attr, pattern, actions, error);
+ if (ret == 0)
+ return ret;
+
+ /* fall back to legacy engines */
+
/**
* Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
*/
@@ -3110,6 +3132,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow *flow,
struct rte_flow_error *error)
{
+ struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
struct rte_flow *pmd_flow = flow;
enum rte_filter_type filter_type = pmd_flow->filter_type;
@@ -3128,6 +3151,15 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
struct ixgbe_rss_conf_ele *rss_filter_ptr;
+ /* try the new flow engine first */
+ ret = ci_flow_destroy(&ad->flow_engine_conf,
+ &ixgbe_flow_engine_list, flow, error);
+ if (ret == 0) {
+ return 0;
+ }
+
+ /* fall back to legacy engines */
+
/* Special case for SECURITY flows */
if (flow->is_security) {
ret = 0;
@@ -3245,8 +3277,16 @@ static int
ixgbe_flow_flush(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
+ struct ixgbe_adapter *ad = dev->data->dev_private;
int ret = 0;
+ /* flush all flows from the new flow engine */
+ ret = ci_flow_flush(&ad->flow_engine_conf, &ixgbe_flow_engine_list, error);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to flush flow");
+ return ret;
+ }
+
ixgbe_clear_all_ntuple_filter(dev);
ixgbe_clear_all_ethertype_filter(dev);
ixgbe_clear_syn_filter(dev);
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
new file mode 100644
index 0000000000..5e68c9886c
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#ifndef _IXGBE_FLOW_H_
+#define _IXGBE_FLOW_H_
+
+#include "../common/flow_engine.h"
+
+extern const struct ci_flow_engine_list ixgbe_flow_engine_list;
+
+#endif /* _IXGBE_FLOW_H_ */
--
2.47.3
* [RFC PATCH v1 05/21] net/ixgbe: reimplement ethertype parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (3 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 04/21] net/ixgbe: add support for common flow parsing Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 06/21] net/ixgbe: reimplement syn parser Anatoly Burakov
` (16 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement a
flow parser for ethertype rules.
The old ethertype parser accepted certain things that were later rejected
by the actual ethertype installation code, in particular the DROP action
and dst MAC address filtering. The graph parser rejects these at parse
time instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 19 --
drivers/net/intel/ixgbe/ixgbe_flow.c | 238 +----------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 12 +
.../net/intel/ixgbe/ixgbe_flow_ethertype.c | 240 ++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
5 files changed, 262 insertions(+), 248 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index b4435acd20..a8ceca6cc6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -6841,25 +6841,6 @@ ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
int ret;
struct ixgbe_ethertype_filter ethertype_filter;
- if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
- return -EINVAL;
-
- if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
- filter->ether_type == RTE_ETHER_TYPE_IPV6) {
- PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in"
- " ethertype filter.", filter->ether_type);
- return -EINVAL;
- }
-
- if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
- PMD_DRV_LOG(ERR, "mac compare is unsupported.");
- return -EINVAL;
- }
- if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) {
- PMD_DRV_LOG(ERR, "drop option is unsupported.");
- return -EINVAL;
- }
-
ret = ixgbe_ethertype_filter_lookup(filter_info, filter->ether_type);
if (ret >= 0 && add) {
PMD_DRV_LOG(ERR, "ethertype (0x%04x) filter exists.",
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 3f33d28207..6dda2f6a3c 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -46,6 +46,7 @@
#include "../common/flow_check.h"
#include "../common/flow_engine.h"
+#include "ixgbe_flow.h"
#define IXGBE_MIN_N_TUPLE_PRIO 1
#define IXGBE_MAX_N_TUPLE_PRIO 7
@@ -88,7 +89,6 @@ struct ixgbe_flow_mem {
};
TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
-TAILQ_HEAD(ixgbe_ethertype_filter_list, ixgbe_ethertype_filter_ele);
TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
@@ -96,14 +96,17 @@ TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
static struct ixgbe_ntuple_filter_list filter_ntuple_list;
-static struct ixgbe_ethertype_filter_list filter_ethertype_list;
static struct ixgbe_syn_filter_list filter_syn_list;
static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
static struct ixgbe_rss_filter_list filter_rss_list;
static struct ixgbe_flow_mem_list ixgbe_flow_list;
-const struct ci_flow_engine_list ixgbe_flow_engine_list = {0};
+const struct ci_flow_engine_list ixgbe_flow_engine_list = {
+ {
+ &ixgbe_ethertype_flow_engine,
+ }
+};
/**
* Endless loop will never happen with below assumption
@@ -127,14 +130,14 @@ const struct rte_flow_item *next_no_void_pattern(
/*
* All ixgbe engines mostly check the same stuff, so use a common check.
*/
-static int
+int
ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
struct rte_flow_error *error)
{
const struct rte_flow_action *action;
- struct rte_eth_dev *dev = (struct rte_eth_dev *)param->driver_ctx;
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ const struct rte_eth_dev *dev = param->driver_ctx;
+ const struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
size_t idx;
for (idx = 0; idx < actions->count; idx++) {
@@ -685,169 +688,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
return 0;
}
-/**
- * Parse the rule to see if it is a ethertype rule.
- * And get the ethertype filter info BTW.
- * pattern:
- * The first not void item can be ETH.
- * The next not void item must be END.
- * action:
- * The first not void action should be QUEUE.
- * The next not void action should be END.
- * pattern example:
- * ITEM Spec Mask
- * ETH type 0x0807 0xFFFF
- * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
- */
-static int
-cons_parse_ethertype_filter(const struct rte_flow_item *pattern,
- const struct rte_flow_action *action,
- struct rte_eth_ethertype_filter *filter,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
-
- item = next_no_void_pattern(pattern, NULL);
- /* The first non-void item should be MAC. */
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ethertype filter");
- return -rte_errno;
- }
-
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Get the MAC info. */
- if (!item->spec || !item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ethertype filter");
- return -rte_errno;
- }
-
- eth_spec = item->spec;
- eth_mask = item->mask;
-
- /* Mask bits of source MAC address must be full of 0.
- * Mask bits of destination MAC address must be full
- * of 1 or full of 0.
- */
- if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
- (!rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
- !rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr))) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid ether address mask");
- return -rte_errno;
- }
-
- if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid ethertype mask");
- return -rte_errno;
- }
-
- /* If mask bits of destination MAC address
- * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
- */
- if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr)) {
- filter->mac_addr = eth_spec->hdr.dst_addr;
- filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
- } else {
- filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
- }
- filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
-
- /* Check if the next non-void item is END. */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ethertype filter.");
- return -rte_errno;
- }
-
- filter->queue = ((const struct rte_flow_action_queue *)action->conf)->index;
-
- return 0;
-}
-
-static int
-ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
- struct rte_eth_ethertype_filter *filter, struct rte_flow_error *error)
-{
- int ret;
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only queue is allowed here */
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_END
- },
- .max_actions = 1,
- .driver_ctx = dev,
- .check = ixgbe_flow_actions_check
- };
- const struct rte_flow_action *action;
-
- if (hw->mac.type != ixgbe_mac_82599EB &&
- hw->mac.type != ixgbe_mac_X540 &&
- hw->mac.type != ixgbe_mac_X550 &&
- hw->mac.type != ixgbe_mac_X550EM_x &&
- hw->mac.type != ixgbe_mac_X550EM_a &&
- hw->mac.type != ixgbe_mac_E610)
- return -ENOTSUP;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
-
- action = parsed_actions.actions[0];
-
- ret = cons_parse_ethertype_filter(pattern, action, filter, error);
- if (ret)
- return ret;
-
- if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
- filter->ether_type == RTE_ETHER_TYPE_IPV6) {
- memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "IPv4/IPv6 not supported by ethertype filter");
- return -rte_errno;
- }
-
- if (filter->flags & RTE_ETHTYPE_FLAGS_MAC) {
- memset(filter, 0, sizeof(struct rte_eth_ethertype_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "mac compare is unsupported");
- return -rte_errno;
- }
-
- return 0;
-}
-
/**
* Parse the rule to see if it is a TCP SYN rule.
* And get the TCP SYN filter info BTW.
@@ -2709,7 +2549,6 @@ void
ixgbe_filterlist_init(void)
{
TAILQ_INIT(&filter_ntuple_list);
- TAILQ_INIT(&filter_ethertype_list);
TAILQ_INIT(&filter_syn_list);
TAILQ_INIT(&filter_fdir_list);
TAILQ_INIT(&filter_l2_tunnel_list);
@@ -2721,7 +2560,6 @@ void
ixgbe_filterlist_flush(void)
{
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
@@ -2735,13 +2573,6 @@ ixgbe_filterlist_flush(void)
rte_free(ntuple_filter_ptr);
}
- while ((ethertype_filter_ptr = TAILQ_FIRST(&filter_ethertype_list))) {
- TAILQ_REMOVE(&filter_ethertype_list,
- ethertype_filter_ptr,
- entries);
- rte_free(ethertype_filter_ptr);
- }
-
while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
TAILQ_REMOVE(&filter_syn_list,
syn_filter_ptr,
@@ -2795,7 +2626,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_ethertype_filter ethertype_filter;
struct rte_eth_syn_filter syn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
@@ -2804,7 +2634,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
struct rte_flow *flow = NULL;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
@@ -2871,32 +2700,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
goto out;
}
- memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
- ret = ixgbe_parse_ethertype_filter(dev, attr, pattern,
- actions, &ethertype_filter, error);
- if (!ret) {
- ret = ixgbe_add_del_ethertype_filter(dev,
- &ethertype_filter, TRUE);
- if (!ret) {
- ethertype_filter_ptr = rte_zmalloc(
- "ixgbe_ethertype_filter",
- sizeof(struct ixgbe_ethertype_filter_ele), 0);
- if (!ethertype_filter_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- rte_memcpy(&ethertype_filter_ptr->filter_info,
- &ethertype_filter,
- sizeof(struct rte_eth_ethertype_filter));
- TAILQ_INSERT_TAIL(&filter_ethertype_list,
- ethertype_filter_ptr, entries);
- flow->rule = ethertype_filter_ptr;
- flow->filter_type = RTE_ETH_FILTER_ETHERTYPE;
- return flow;
- }
- goto out;
- }
-
memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
ret = ixgbe_parse_syn_filter(dev, attr, pattern,
actions, &syn_filter, error);
@@ -3067,7 +2870,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
{
struct ixgbe_adapter *ad = dev->data->dev_private;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_ethertype_filter ethertype_filter;
struct rte_eth_syn_filter syn_filter;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_fdir_rule fdir_rule;
@@ -3095,12 +2897,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
if (!ret)
return 0;
- memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
- ret = ixgbe_parse_ethertype_filter(dev, attr, pattern,
- actions, &ethertype_filter, error);
- if (!ret)
- return 0;
-
memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
ret = ixgbe_parse_syn_filter(dev, attr, pattern,
actions, &syn_filter, error);
@@ -3137,12 +2933,10 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow *pmd_flow = flow;
enum rte_filter_type filter_type = pmd_flow->filter_type;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_ethertype_filter ethertype_filter;
struct rte_eth_syn_filter syn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
@@ -3180,20 +2974,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
rte_free(ntuple_filter_ptr);
}
break;
- case RTE_ETH_FILTER_ETHERTYPE:
- ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
- pmd_flow->rule;
- rte_memcpy(&ethertype_filter,
- &ethertype_filter_ptr->filter_info,
- sizeof(struct rte_eth_ethertype_filter));
- ret = ixgbe_add_del_ethertype_filter(dev,
- &ethertype_filter, FALSE);
- if (!ret) {
- TAILQ_REMOVE(&filter_ethertype_list,
- ethertype_filter_ptr, entries);
- rte_free(ethertype_filter_ptr);
- }
- break;
case RTE_ETH_FILTER_SYN:
syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
pmd_flow->rule;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index 5e68c9886c..f67937f3ea 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -5,8 +5,20 @@
#ifndef _IXGBE_FLOW_H_
#define _IXGBE_FLOW_H_
+#include "../common/flow_check.h"
#include "../common/flow_engine.h"
+enum ixgbe_flow_engine_type {
+ IXGBE_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
+};
+
+int
+ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error);
+
extern const struct ci_flow_engine_list ixgbe_flow_engine_list;
+extern const struct ci_flow_engine ixgbe_ethertype_flow_engine;
+
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c b/drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c
new file mode 100644
index 0000000000..b2bef3ef3a
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_ethertype_flow {
+ struct rte_flow flow;
+ struct rte_eth_ethertype_filter filter;
+};
+
+struct ixgbe_ethertype_ctx {
+ struct ci_flow_engine_ctx base;
+ struct rte_eth_ethertype_filter filter;
+};
+
+/**
+ * Ethertype filter graph implementation
+ * Pattern: START -> ETH -> END
+ */
+
+enum ixgbe_ethertype_node_id {
+ IXGBE_ETHERTYPE_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_ETHERTYPE_NODE_ETH,
+ IXGBE_ETHERTYPE_NODE_END,
+ IXGBE_ETHERTYPE_NODE_MAX,
+};
+
+static int
+ixgbe_ethertype_node_eth_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *eth_spec;
+ const struct rte_flow_item_eth *eth_mask;
+
+ eth_spec = item->spec;
+ eth_mask = item->mask;
+
+ /* Source MAC mask must be all zeros */
+ if (!CI_FIELD_IS_ZERO(&eth_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Source MAC filtering not supported");
+ }
+
+ /* Dest MAC mask must be all zeros */
+ if (!CI_FIELD_IS_ZERO(&eth_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Destination MAC filtering not supported");
+ }
+
+ /* Ethertype mask must be exact match */
+ if (!CI_FIELD_IS_MASKED(&eth_mask->hdr.ether_type)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Ethertype must be exactly matched");
+ }
+
+ /* IPv4/IPv6 ethertypes not supported by hardware */
+ uint16_t ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+ if (ether_type == RTE_ETHER_TYPE_IPV4 || ether_type == RTE_ETHER_TYPE_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4/IPv6 not supported by ethertype filter");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_ethertype_node_eth_process(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_ethertype_ctx *graph_ctx = ctx;
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+
+ graph_ctx->filter.ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_ethertype_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_ETHERTYPE_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_ETHERTYPE_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_ethertype_node_eth_validate,
+ .process = ixgbe_ethertype_node_eth_process,
+ },
+ [IXGBE_ETHERTYPE_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_ETHERTYPE_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_ETHERTYPE_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_ETHERTYPE_NODE_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_ETHERTYPE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_flow_ethertype_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only queue is allowed here */
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 1,
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_flow_actions_check
+ };
+ struct ixgbe_ethertype_ctx *ethertype_ctx = (struct ixgbe_ethertype_ctx *)ctx;
+ const struct rte_flow_action_queue *q_act;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ q_act = (const struct rte_flow_action_queue *)parsed_actions.actions[0]->conf;
+
+ /* set up filter action */
+ ethertype_ctx->filter.queue = q_act->index;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_ethertype_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_ethertype_ctx *ethertype_ctx = (const struct ixgbe_ethertype_ctx *)ctx;
+ struct ixgbe_ethertype_flow *ethertype_flow = (struct ixgbe_ethertype_flow *)flow;
+
+ /* copy filter configuration */
+ ethertype_flow->filter = ethertype_ctx->filter;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_ethertype_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_ethertype_flow *ethertype_flow = (struct ixgbe_ethertype_flow *)flow;
+ int ret;
+
+ ret = ixgbe_add_del_ethertype_filter(flow->dev, &ethertype_flow->filter, TRUE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add ethertype filter");
+ }
+ return ret;
+}
+
+static int
+ixgbe_flow_ethertype_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_ethertype_flow *ethertype_flow = (struct ixgbe_ethertype_flow *)flow;
+ int ret;
+
+ ret = ixgbe_add_del_ethertype_filter(flow->dev, &ethertype_flow->filter, FALSE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to delete ethertype filter");
+ }
+ return ret;
+}
+
+static bool
+ixgbe_flow_ethertype_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_82599EB ||
+ hw->mac.type == ixgbe_mac_X540 ||
+ hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_ethertype_ops = {
+ .is_available = ixgbe_flow_ethertype_is_available,
+ .ctx_parse = ixgbe_flow_ethertype_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_ethertype_ctx_to_flow,
+ .flow_install = ixgbe_flow_ethertype_install,
+ .flow_uninstall = ixgbe_flow_ethertype_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_ethertype_flow_engine = {
+ .name = "ixgbe_ethertype",
+ .ctx_size = sizeof(struct ixgbe_ethertype_ctx),
+ .flow_size = sizeof(struct ixgbe_ethertype_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_ETHERTYPE,
+ .graph = &ixgbe_ethertype_graph,
+ .ops = &ixgbe_ethertype_ops,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index 7e737ee7b4..54d7e87de8 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -11,6 +11,7 @@ sources += files(
'ixgbe_ethdev.c',
'ixgbe_fdir.c',
'ixgbe_flow.c',
+ 'ixgbe_flow_ethertype.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 06/21] net/ixgbe: reimplement syn parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (4 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 05/21] net/ixgbe: reimplement ethertype parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 07/21] net/ixgbe: reimplement L2 tunnel parser Anatoly Burakov
` (15 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement a
flow parser for TCP SYN rules.
As a result of this migration, queue index validation has changed:
- queue is now validated at parse time against the configured number of Rx
queues (nb_rx_queues), rather than at install time against the hardware
maximum (IXGBE_MAX_RX_QUEUE_NUM)
- the per-function queue bound check in ixgbe_syn_filter_set() has been
removed as it is no longer needed
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 3 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 272 +---------------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 2 +
drivers/net/intel/ixgbe/ixgbe_flow_syn.c | 280 +++++++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
5 files changed, 285 insertions(+), 273 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_syn.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index a8ceca6cc6..442e4d96c6 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -6461,9 +6461,6 @@ ixgbe_syn_filter_set(struct rte_eth_dev *dev,
uint32_t syn_info;
uint32_t synqf;
- if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM)
- return -EINVAL;
-
syn_info = filter_info->syn_info;
if (add) {
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 6dda2f6a3c..d99a4a7f2a 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -57,16 +57,6 @@ struct ixgbe_ntuple_filter_ele {
TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
struct rte_eth_ntuple_filter filter_info;
};
-/* ethertype filter list structure */
-struct ixgbe_ethertype_filter_ele {
- TAILQ_ENTRY(ixgbe_ethertype_filter_ele) entries;
- struct rte_eth_ethertype_filter filter_info;
-};
-/* syn filter list structure */
-struct ixgbe_eth_syn_filter_ele {
- TAILQ_ENTRY(ixgbe_eth_syn_filter_ele) entries;
- struct rte_eth_syn_filter filter_info;
-};
/* fdir filter list structure */
struct ixgbe_fdir_rule_ele {
TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
@@ -89,14 +79,12 @@ struct ixgbe_flow_mem {
};
TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
-TAILQ_HEAD(ixgbe_syn_filter_list, ixgbe_eth_syn_filter_ele);
TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
static struct ixgbe_ntuple_filter_list filter_ntuple_list;
-static struct ixgbe_syn_filter_list filter_syn_list;
static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
static struct ixgbe_rss_filter_list filter_rss_list;
@@ -105,7 +93,8 @@ static struct ixgbe_flow_mem_list ixgbe_flow_list;
const struct ci_flow_engine_list ixgbe_flow_engine_list = {
{
&ixgbe_ethertype_flow_engine,
- }
+ &ixgbe_syn_flow_engine,
+ },
};
/**
@@ -688,205 +677,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
return 0;
}
-/**
- * Parse the rule to see if it is a TCP SYN rule.
- * And get the TCP SYN filter info BTW.
- * pattern:
- * The first not void item must be ETH.
- * The second not void item must be IPV4 or IPV6.
- * The third not void item must be TCP.
- * The next not void item must be END.
- * action:
- * The first not void action should be QUEUE.
- * The next not void action should be END.
- * pattern example:
- * ITEM Spec Mask
- * ETH NULL NULL
- * IPV4/IPV6 NULL NULL
- * TCP tcp_flags 0x02 0xFF
- * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
- */
-static int
-cons_parse_syn_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[],
- const struct rte_flow_action_queue *q_act, struct rte_eth_syn_filter *filter,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_tcp *tcp_spec;
- const struct rte_flow_item_tcp *tcp_mask;
-
-
- /* the first not void item should be MAC or IPv4 or IPv6 or TCP */
- item = next_no_void_pattern(pattern, NULL);
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
- item->type != RTE_FLOW_ITEM_TYPE_TCP) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by syn filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Skip Ethernet */
- if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
- /* if the item is MAC, the content should be NULL */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid SYN address mask");
- return -rte_errno;
- }
-
- /* check if the next not void item is IPv4 or IPv6 */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by syn filter");
- return -rte_errno;
- }
- }
-
- /* Skip IP */
- if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- /* if the item is IP, the content should be NULL */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid SYN mask");
- return -rte_errno;
- }
-
- /* check if the next not void item is TCP */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_TCP) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by syn filter");
- return -rte_errno;
- }
- }
-
- /* Get the TCP info. Only support SYN. */
- if (!item->spec || !item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid SYN mask");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- tcp_spec = item->spec;
- tcp_mask = item->mask;
- if (!(tcp_spec->hdr.tcp_flags & RTE_TCP_SYN_FLAG) ||
- tcp_mask->hdr.src_port ||
- tcp_mask->hdr.dst_port ||
- tcp_mask->hdr.sent_seq ||
- tcp_mask->hdr.recv_ack ||
- tcp_mask->hdr.data_off ||
- tcp_mask->hdr.tcp_flags != RTE_TCP_SYN_FLAG ||
- tcp_mask->hdr.rx_win ||
- tcp_mask->hdr.cksum ||
- tcp_mask->hdr.tcp_urp) {
- memset(filter, 0, sizeof(struct rte_eth_syn_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by syn filter");
- return -rte_errno;
- }
-
- /* check if the next not void item is END */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_syn_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by syn filter");
- return -rte_errno;
- }
-
- filter->queue = q_act->index;
-
- /* Support 2 priorities, the lowest or highest. */
- if (!attr->priority) {
- filter->hig_pri = 0;
- } else if (attr->priority == (uint32_t)~0U) {
- filter->hig_pri = 1;
- } else {
- memset(filter, 0, sizeof(struct rte_eth_syn_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
- attr, "Priority can be 0 or 0xFFFFFFFF");
- return -rte_errno;
- }
-
- return 0;
-}
-
-static int
-ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
- struct rte_eth_syn_filter *filter, struct rte_flow_error *error)
-{
- int ret;
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only queue is allowed here */
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_END
- },
- .driver_ctx = dev,
- .check = ixgbe_flow_actions_check,
- .max_actions = 1,
- };
- struct ci_flow_attr_check_param attr_param = {
- .allow_priority = true,
- };
- const struct rte_flow_action *action;
-
- if (hw->mac.type != ixgbe_mac_82599EB &&
- hw->mac.type != ixgbe_mac_X540 &&
- hw->mac.type != ixgbe_mac_X550 &&
- hw->mac.type != ixgbe_mac_X550EM_x &&
- hw->mac.type != ixgbe_mac_X550EM_a &&
- hw->mac.type != ixgbe_mac_E610)
- return -ENOTSUP;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, &attr_param, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
-
- action = parsed_actions.actions[0];
-
- return cons_parse_syn_filter(attr, pattern, action->conf, filter, error);
-}
-
/**
* Parse the rule to see if it is a L2 tunnel rule.
* And get the L2 tunnel filter info BTW.
@@ -2549,7 +2339,6 @@ void
ixgbe_filterlist_init(void)
{
TAILQ_INIT(&filter_ntuple_list);
- TAILQ_INIT(&filter_syn_list);
TAILQ_INIT(&filter_fdir_list);
TAILQ_INIT(&filter_l2_tunnel_list);
TAILQ_INIT(&filter_rss_list);
@@ -2560,7 +2349,6 @@ void
ixgbe_filterlist_flush(void)
{
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
@@ -2573,13 +2361,6 @@ ixgbe_filterlist_flush(void)
rte_free(ntuple_filter_ptr);
}
- while ((syn_filter_ptr = TAILQ_FIRST(&filter_syn_list))) {
- TAILQ_REMOVE(&filter_syn_list,
- syn_filter_ptr,
- entries);
- rte_free(syn_filter_ptr);
- }
-
while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
TAILQ_REMOVE(&filter_l2_tunnel_list,
l2_tn_filter_ptr,
@@ -2626,7 +2407,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_syn_filter syn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_hw_fdir_info *fdir_info =
@@ -2634,7 +2414,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct ixgbe_rte_flow_rss_conf rss_conf;
struct rte_flow *flow = NULL;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
@@ -2700,31 +2479,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
goto out;
}
- memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
- ret = ixgbe_parse_syn_filter(dev, attr, pattern,
- actions, &syn_filter, error);
- if (!ret) {
- ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE);
- if (!ret) {
- syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter",
- sizeof(struct ixgbe_eth_syn_filter_ele), 0);
- if (!syn_filter_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- rte_memcpy(&syn_filter_ptr->filter_info,
- &syn_filter,
- sizeof(struct rte_eth_syn_filter));
- TAILQ_INSERT_TAIL(&filter_syn_list,
- syn_filter_ptr,
- entries);
- flow->rule = syn_filter_ptr;
- flow->filter_type = RTE_ETH_FILTER_SYN;
- return flow;
- }
- goto out;
- }
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -2870,7 +2624,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
{
struct ixgbe_adapter *ad = dev->data->dev_private;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_syn_filter syn_filter;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_rte_flow_rss_conf rss_conf;
@@ -2897,12 +2650,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
if (!ret)
return 0;
- memset(&syn_filter, 0, sizeof(struct rte_eth_syn_filter));
- ret = ixgbe_parse_syn_filter(dev, attr, pattern,
- actions, &syn_filter, error);
- if (!ret)
- return 0;
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -2933,11 +2680,9 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow *pmd_flow = flow;
enum rte_filter_type filter_type = pmd_flow->filter_type;
struct rte_eth_ntuple_filter ntuple_filter;
- struct rte_eth_syn_filter syn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_syn_filter_ele *syn_filter_ptr;
struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
@@ -2974,19 +2719,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
rte_free(ntuple_filter_ptr);
}
break;
- case RTE_ETH_FILTER_SYN:
- syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
- pmd_flow->rule;
- rte_memcpy(&syn_filter,
- &syn_filter_ptr->filter_info,
- sizeof(struct rte_eth_syn_filter));
- ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
- if (!ret) {
- TAILQ_REMOVE(&filter_syn_list,
- syn_filter_ptr, entries);
- rte_free(syn_filter_ptr);
- }
- break;
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
rte_memcpy(&fdir_rule,
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index f67937f3ea..3a5d0299b3 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -10,6 +10,7 @@
enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
+ IXGBE_FLOW_ENGINE_TYPE_SYN,
};
int
@@ -20,5 +21,6 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
extern const struct ci_flow_engine_list ixgbe_flow_engine_list;
extern const struct ci_flow_engine ixgbe_ethertype_flow_engine;
+extern const struct ci_flow_engine ixgbe_syn_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_syn.c b/drivers/net/intel/ixgbe/ixgbe_flow_syn.c
new file mode 100644
index 0000000000..6cde38c326
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_syn.c
@@ -0,0 +1,280 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_syn_flow {
+ struct rte_flow flow;
+ struct rte_eth_syn_filter syn;
+};
+
+struct ixgbe_syn_ctx {
+ struct ci_flow_engine_ctx base;
+ struct rte_eth_syn_filter syn;
+};
+
+/**
+ * SYN filter graph implementation
+ * Pattern: START -> [ETH] -> (IPV4|IPV6) -> TCP -> END
+ */
+
+enum ixgbe_syn_node_id {
+ IXGBE_SYN_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_SYN_NODE_ETH,
+ IXGBE_SYN_NODE_IPV4,
+ IXGBE_SYN_NODE_IPV6,
+ IXGBE_SYN_NODE_TCP,
+ IXGBE_SYN_NODE_END,
+ IXGBE_SYN_NODE_MAX,
+};
+
+static int
+ixgbe_validate_syn_tcp(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *tcp_spec;
+ const struct rte_flow_item_tcp *tcp_mask;
+
+ tcp_spec = item->spec;
+ tcp_mask = item->mask;
+
+ /* SYN flag must be set in spec */
+ if (!(tcp_spec->hdr.tcp_flags & RTE_TCP_SYN_FLAG)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "TCP SYN flag must be set");
+ }
+
+ /* Mask must match only SYN flag */
+ if (tcp_mask->hdr.tcp_flags != RTE_TCP_SYN_FLAG) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "TCP flags mask must match SYN only");
+ }
+
+ /* All other TCP fields must have zero mask */
+ if (tcp_mask->hdr.src_port ||
+ tcp_mask->hdr.dst_port ||
+ tcp_mask->hdr.sent_seq ||
+ tcp_mask->hdr.recv_ack ||
+ tcp_mask->hdr.data_off ||
+ tcp_mask->hdr.rx_win ||
+ tcp_mask->hdr.cksum ||
+ tcp_mask->hdr.tcp_urp) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only TCP flags filtering supported");
+ }
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_syn_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_SYN_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_SYN_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_SYN_NODE_IPV4] = {
+ .name = "IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_SYN_NODE_IPV6] = {
+ .name = "IPV6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_SYN_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_syn_tcp,
+ },
+ [IXGBE_SYN_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_SYN_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_SYN_NODE_ETH,
+ IXGBE_SYN_NODE_IPV4,
+ IXGBE_SYN_NODE_IPV6,
+ IXGBE_SYN_NODE_TCP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SYN_NODE_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_SYN_NODE_IPV4,
+ IXGBE_SYN_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SYN_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ IXGBE_SYN_NODE_TCP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SYN_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ IXGBE_SYN_NODE_TCP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SYN_NODE_TCP] = {
+ .next = (const size_t[]) {
+ IXGBE_SYN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_flow_syn_ctx_parse(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_syn_ctx *syn_ctx = (struct ixgbe_syn_ctx *)ctx;
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only queue is allowed here */
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_flow_actions_check,
+ .max_actions = 1,
+ };
+ struct ci_flow_attr_check_param attr_param = {
+ .allow_priority = true,
+ };
+ const struct rte_flow_action_queue *q_act;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, &attr_param, error);
+ if (ret)
+ return ret;
+
+ /* check priority */
+ if (attr->priority != 0 && attr->priority != (uint32_t)~0U) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+ attr, "Priority can be 0 or 0xFFFFFFFF");
+ }
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ q_act = parsed_actions.actions[0]->conf;
+
+ syn_ctx->syn.queue = q_act->index;
+
+ /* Support 2 priorities, the lowest or highest. */
+ syn_ctx->syn.hig_pri = attr->priority == 0 ? 0 : 1;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_syn_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_syn_ctx *syn_ctx = (const struct ixgbe_syn_ctx *)ctx;
+ struct ixgbe_syn_flow *syn_flow = (struct ixgbe_syn_flow *)flow;
+
+ syn_flow->syn = syn_ctx->syn;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_syn_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_syn_flow *syn_flow = (struct ixgbe_syn_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret = 0;
+
+ ret = ixgbe_syn_filter_set(dev, &syn_flow->syn, true);
+ if (ret != 0) {
+ return rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_HANDLE, flow,
+ "Failed to install SYN filter");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_syn_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_syn_flow *syn_flow = (struct ixgbe_syn_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret = 0;
+
+ ret = ixgbe_syn_filter_set(dev, &syn_flow->syn, false);
+ if (ret != 0) {
+ return rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_HANDLE, flow,
+ "Failed to uninstall SYN filter");
+ }
+
+ return 0;
+}
+
+static bool
+ixgbe_flow_syn_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_82599EB ||
+ hw->mac.type == ixgbe_mac_X540 ||
+ hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_syn_ops = {
+ .is_available = ixgbe_flow_syn_is_available,
+ .ctx_parse = ixgbe_flow_syn_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_syn_ctx_to_flow,
+ .flow_install = ixgbe_flow_syn_flow_install,
+ .flow_uninstall = ixgbe_flow_syn_flow_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_syn_flow_engine = {
+ .name = "ixgbe_syn",
+ .ctx_size = sizeof(struct ixgbe_syn_ctx),
+ .flow_size = sizeof(struct ixgbe_syn_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_SYN,
+ .ops = &ixgbe_syn_ops,
+ .graph = &ixgbe_syn_graph,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index 54d7e87de8..bd9be0add3 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -12,6 +12,7 @@ sources += files(
'ixgbe_fdir.c',
'ixgbe_flow.c',
'ixgbe_flow_ethertype.c',
+ 'ixgbe_flow_syn.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 07/21] net/ixgbe: reimplement L2 tunnel parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (5 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 06/21] net/ixgbe: reimplement syn parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 08/21] net/ixgbe: reimplement ntuple parser Anatoly Burakov
` (14 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement
the flow parser for L2 tunnel rules.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 217 +-------------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 2 +
drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c | 228 +++++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
4 files changed, 232 insertions(+), 216 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index d99a4a7f2a..313af2362b 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -62,11 +62,6 @@ struct ixgbe_fdir_rule_ele {
TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
struct ixgbe_fdir_rule filter_info;
};
-/* l2_tunnel filter list structure */
-struct ixgbe_eth_l2_tunnel_conf_ele {
- TAILQ_ENTRY(ixgbe_eth_l2_tunnel_conf_ele) entries;
- struct ixgbe_l2_tunnel_conf filter_info;
-};
/* rss filter list structure */
struct ixgbe_rss_conf_ele {
TAILQ_ENTRY(ixgbe_rss_conf_ele) entries;
@@ -80,13 +75,11 @@ struct ixgbe_flow_mem {
TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
-TAILQ_HEAD(ixgbe_l2_tunnel_filter_list, ixgbe_eth_l2_tunnel_conf_ele);
TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
static struct ixgbe_ntuple_filter_list filter_ntuple_list;
static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
-static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
static struct ixgbe_rss_filter_list filter_rss_list;
static struct ixgbe_flow_mem_list ixgbe_flow_list;
@@ -94,6 +87,7 @@ const struct ci_flow_engine_list ixgbe_flow_engine_list = {
{
&ixgbe_ethertype_flow_engine,
&ixgbe_syn_flow_engine,
+ &ixgbe_l2_tunnel_flow_engine,
},
};
@@ -677,160 +671,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
return 0;
}
-/**
- * Parse the rule to see if it is a L2 tunnel rule.
- * And get the L2 tunnel filter info BTW.
- * Only support E-tag now.
- * pattern:
- * The first not void item can be E_TAG.
- * The next not void item must be END.
- * action:
- * The first not void action should be VF or PF.
- * The next not void action should be END.
- * pattern example:
- * ITEM Spec Mask
- * E_TAG grp 0x1 0x3
- e_cid_base 0x309 0xFFF
- * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
- */
-static int
-cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action *action,
- struct ixgbe_l2_tunnel_conf *filter,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_e_tag *e_tag_spec;
- const struct rte_flow_item_e_tag *e_tag_mask;
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
- /* The first not void item should be e-tag. */
- item = next_no_void_pattern(pattern, NULL);
- if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) {
- memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by L2 tunnel filter");
- return -rte_errno;
- }
-
- if (!item->spec || !item->mask) {
- memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by L2 tunnel filter");
- return -rte_errno;
- }
-
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- e_tag_spec = item->spec;
- e_tag_mask = item->mask;
-
- /* Only care about GRP and E cid base. */
- if (e_tag_mask->epcp_edei_in_ecid_b ||
- e_tag_mask->in_ecid_e ||
- e_tag_mask->ecid_e ||
- e_tag_mask->rsvd_grp_ecid_b != rte_cpu_to_be_16(0x3FFF)) {
- memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by L2 tunnel filter");
- return -rte_errno;
- }
-
- filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
- /**
- * grp and e_cid_base are bit fields and only use 14 bits.
- * e-tag id is taken as little endian by HW.
- */
- filter->tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
-
- /* check if the next not void item is END */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by L2 tunnel filter");
- return -rte_errno;
- }
-
- if (action->type == RTE_FLOW_ACTION_TYPE_VF) {
- const struct rte_flow_action_vf *act_vf = action->conf;
- filter->pool = act_vf->id;
- } else {
- filter->pool = pci_dev->max_vfs;
- }
-
- return 0;
-}
-
-static int
-ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct ixgbe_l2_tunnel_conf *l2_tn_filter,
- struct rte_flow_error *error)
-{
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only vf/pf is allowed here */
- RTE_FLOW_ACTION_TYPE_VF,
- RTE_FLOW_ACTION_TYPE_PF,
- RTE_FLOW_ACTION_TYPE_END
- },
- .driver_ctx = dev,
- .check = ixgbe_flow_actions_check,
- .max_actions = 1,
- };
- int ret = 0;
- const struct rte_flow_action *action;
-
- if (hw->mac.type != ixgbe_mac_X550 &&
- hw->mac.type != ixgbe_mac_X550EM_x &&
- hw->mac.type != ixgbe_mac_X550EM_a &&
- hw->mac.type != ixgbe_mac_E610) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by L2 tunnel filter");
- return -rte_errno;
- }
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
-
- /* only one action is supported */
- if (parsed_actions.count > 1) {
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
- parsed_actions.actions[1],
- "Only one action can be specified at a time");
- }
- action = parsed_actions.actions[0];
-
- ret = cons_parse_l2_tn_filter(dev, pattern, action, l2_tn_filter, error);
-
- return ret;
-}
-
/* search next no void pattern and skip fuzzy */
static inline
const struct rte_flow_item *next_no_fuzzy_pattern(
@@ -2340,7 +2180,6 @@ ixgbe_filterlist_init(void)
{
TAILQ_INIT(&filter_ntuple_list);
TAILQ_INIT(&filter_fdir_list);
- TAILQ_INIT(&filter_l2_tunnel_list);
TAILQ_INIT(&filter_rss_list);
TAILQ_INIT(&ixgbe_flow_list);
}
@@ -2349,7 +2188,6 @@ void
ixgbe_filterlist_flush(void)
{
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
@@ -2361,13 +2199,6 @@ ixgbe_filterlist_flush(void)
rte_free(ntuple_filter_ptr);
}
- while ((l2_tn_filter_ptr = TAILQ_FIRST(&filter_l2_tunnel_list))) {
- TAILQ_REMOVE(&filter_l2_tunnel_list,
- l2_tn_filter_ptr,
- entries);
- rte_free(l2_tn_filter_ptr);
- }
-
while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
TAILQ_REMOVE(&filter_fdir_list,
fdir_rule_ptr,
@@ -2408,13 +2239,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
int ret;
struct rte_eth_ntuple_filter ntuple_filter;
struct ixgbe_fdir_rule fdir_rule;
- struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_hw_fdir_info *fdir_info =
IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
struct ixgbe_rte_flow_rss_conf rss_conf;
struct rte_flow *flow = NULL;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
@@ -2554,29 +2383,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
goto out;
}
- memset(&l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- ret = ixgbe_parse_l2_tn_filter(dev, attr, pattern,
- actions, &l2_tn_filter, error);
- if (!ret) {
- ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE);
- if (!ret) {
- l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter",
- sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0);
- if (!l2_tn_filter_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- rte_memcpy(&l2_tn_filter_ptr->filter_info,
- &l2_tn_filter,
- sizeof(struct ixgbe_l2_tunnel_conf));
- TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
- l2_tn_filter_ptr, entries);
- flow->rule = l2_tn_filter_ptr;
- flow->filter_type = RTE_ETH_FILTER_L2_TUNNEL;
- return flow;
- }
- }
-
memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
ret = ixgbe_parse_rss_filter(dev, attr,
actions, &rss_conf, error);
@@ -2624,7 +2430,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
{
struct ixgbe_adapter *ad = dev->data->dev_private;
struct rte_eth_ntuple_filter ntuple_filter;
- struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
@@ -2656,12 +2461,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
if (!ret)
return 0;
- memset(&l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf));
- ret = ixgbe_parse_l2_tn_filter(dev, attr, pattern,
- actions, &l2_tn_filter, error);
- if (!ret)
- return 0;
-
memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
ret = ixgbe_parse_rss_filter(dev, attr,
actions, &rss_conf, error);
@@ -2681,9 +2480,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
enum rte_filter_type filter_type = pmd_flow->filter_type;
struct rte_eth_ntuple_filter ntuple_filter;
struct ixgbe_fdir_rule fdir_rule;
- struct ixgbe_l2_tunnel_conf l2_tn_filter;
struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
- struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
struct ixgbe_hw_fdir_info *fdir_info =
@@ -2733,18 +2530,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
fdir_info->mask_added = false;
}
break;
- case RTE_ETH_FILTER_L2_TUNNEL:
- l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
- pmd_flow->rule;
- rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
- sizeof(struct ixgbe_l2_tunnel_conf));
- ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
- if (!ret) {
- TAILQ_REMOVE(&filter_l2_tunnel_list,
- l2_tn_filter_ptr, entries);
- rte_free(l2_tn_filter_ptr);
- }
- break;
case RTE_ETH_FILTER_HASH:
rss_filter_ptr = (struct ixgbe_rss_conf_ele *)
pmd_flow->rule;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index 3a5d0299b3..4dabaca0ed 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -11,6 +11,7 @@
enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
IXGBE_FLOW_ENGINE_TYPE_SYN,
+ IXGBE_FLOW_ENGINE_TYPE_L2_TUNNEL,
};
int
@@ -22,5 +23,6 @@ extern const struct ci_flow_engine_list ixgbe_flow_engine_list;
extern const struct ci_flow_engine ixgbe_ethertype_flow_engine;
extern const struct ci_flow_engine ixgbe_syn_flow_engine;
+extern const struct ci_flow_engine ixgbe_l2_tunnel_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c b/drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c
new file mode 100644
index 0000000000..bde7af1b78
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c
@@ -0,0 +1,228 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_l2_tunnel_flow {
+ struct rte_flow flow;
+ struct ixgbe_l2_tunnel_conf l2_tunnel;
+};
+
+struct ixgbe_l2_tunnel_ctx {
+ struct ci_flow_engine_ctx base;
+ struct ixgbe_l2_tunnel_conf l2_tunnel;
+};
+
+/**
+ * L2 tunnel filter graph implementation (E-TAG)
+ * Pattern: START -> E_TAG -> END
+ */
+
+enum ixgbe_l2_tunnel_node_id {
+ IXGBE_L2_TUNNEL_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_L2_TUNNEL_NODE_E_TAG,
+ IXGBE_L2_TUNNEL_NODE_END,
+ IXGBE_L2_TUNNEL_NODE_MAX,
+};
+
+static int
+ixgbe_validate_l2_tunnel_e_tag(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_e_tag *e_tag_mask;
+
+ e_tag_mask = item->mask;
+
+ /* Only GRP and E-CID base supported (rsvd_grp_ecid_b field) */
+ if (e_tag_mask->epcp_edei_in_ecid_b ||
+ e_tag_mask->in_ecid_e ||
+ e_tag_mask->ecid_e ||
+ rte_be_to_cpu_16(e_tag_mask->rsvd_grp_ecid_b) != 0x3FFF) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only GRP and E-CID base (14 bits) supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_l2_tunnel_e_tag(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_l2_tunnel_ctx *l2tun_ctx = ctx;
+ const struct rte_flow_item_e_tag *e_tag_spec = item->spec;
+
+ l2tun_ctx->l2_tunnel.l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
+ l2tun_ctx->l2_tunnel.tunnel_id = rte_be_to_cpu_16(e_tag_spec->rsvd_grp_ecid_b);
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_l2_tunnel_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_L2_TUNNEL_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_L2_TUNNEL_NODE_E_TAG] = {
+ .name = "E_TAG",
+ .type = RTE_FLOW_ITEM_TYPE_E_TAG,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_l2_tunnel_e_tag,
+ .process = ixgbe_process_l2_tunnel_e_tag,
+ },
+ [IXGBE_L2_TUNNEL_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_L2_TUNNEL_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_L2_TUNNEL_NODE_E_TAG,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_L2_TUNNEL_NODE_E_TAG] = {
+ .next = (const size_t[]) {
+ IXGBE_L2_TUNNEL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_flow_l2_tunnel_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_l2_tunnel_ctx *l2tun_ctx = (struct ixgbe_l2_tunnel_ctx *)ctx;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(ctx->dev);
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only vf/pf is allowed here */
+ RTE_FLOW_ACTION_TYPE_VF,
+ RTE_FLOW_ACTION_TYPE_PF,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_flow_actions_check,
+ .max_actions = 1,
+ };
+ const struct rte_flow_action *action;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ action = parsed_actions.actions[0];
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_VF) {
+ const struct rte_flow_action_vf *vf = action->conf;
+ l2tun_ctx->l2_tunnel.pool = vf->id;
+ } else {
+ l2tun_ctx->l2_tunnel.pool = pci_dev->max_vfs;
+ }
+
+ return ret;
+}
+
+static int
+ixgbe_flow_l2_tunnel_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_l2_tunnel_ctx *l2tun_ctx = (const struct ixgbe_l2_tunnel_ctx *)ctx;
+ struct ixgbe_l2_tunnel_flow *l2tun_flow = (struct ixgbe_l2_tunnel_flow *)flow;
+
+ l2tun_flow->l2_tunnel = l2tun_ctx->l2_tunnel;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_l2_tunnel_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_l2_tunnel_flow *l2tun_flow = (struct ixgbe_l2_tunnel_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret;
+
+ /* yes, false */
+ ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2tun_flow->l2_tunnel, FALSE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add L2 tunnel filter");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_l2_tunnel_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_l2_tunnel_flow *l2tun_flow = (struct ixgbe_l2_tunnel_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret;
+
+ ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2tun_flow->l2_tunnel);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to remove L2 tunnel filter");
+ }
+
+ return 0;
+}
+
+static bool
+ixgbe_flow_l2_tunnel_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_l2_tunnel_ops = {
+ .is_available = ixgbe_flow_l2_tunnel_is_available,
+ .ctx_parse = ixgbe_flow_l2_tunnel_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_l2_tunnel_ctx_to_flow,
+ .flow_install = ixgbe_flow_l2_tunnel_flow_install,
+ .flow_uninstall = ixgbe_flow_l2_tunnel_flow_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_l2_tunnel_flow_engine = {
+ .name = "ixgbe_l2_tunnel",
+ .ctx_size = sizeof(struct ixgbe_l2_tunnel_ctx),
+ .flow_size = sizeof(struct ixgbe_l2_tunnel_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_L2_TUNNEL,
+ .ops = &ixgbe_l2_tunnel_ops,
+ .graph = &ixgbe_l2_tunnel_graph,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index bd9be0add3..0aaeb82a36 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -13,6 +13,7 @@ sources += files(
'ixgbe_flow.c',
'ixgbe_flow_ethertype.c',
'ixgbe_flow_syn.c',
+ 'ixgbe_flow_l2tun.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 08/21] net/ixgbe: reimplement ntuple parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (6 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 07/21] net/ixgbe: reimplement L2 tunnel parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 09/21] net/ixgbe: reimplement security parser Anatoly Burakov
` (13 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement
the flow parser for ntuple filters.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_flow.c | 486 +-------------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 2 +
drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c | 483 +++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
4 files changed, 487 insertions(+), 485 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 313af2362b..f509b47733 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -48,15 +48,8 @@
#include "../common/flow_engine.h"
#include "ixgbe_flow.h"
-#define IXGBE_MIN_N_TUPLE_PRIO 1
-#define IXGBE_MAX_N_TUPLE_PRIO 7
#define IXGBE_MAX_FLX_SOURCE_OFF 62
-/* ntuple filter list structure */
-struct ixgbe_ntuple_filter_ele {
- TAILQ_ENTRY(ixgbe_ntuple_filter_ele) entries;
- struct rte_eth_ntuple_filter filter_info;
-};
/* fdir filter list structure */
struct ixgbe_fdir_rule_ele {
TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
@@ -73,12 +66,10 @@ struct ixgbe_flow_mem {
struct rte_flow *flow;
};
-TAILQ_HEAD(ixgbe_ntuple_filter_list, ixgbe_ntuple_filter_ele);
TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
-static struct ixgbe_ntuple_filter_list filter_ntuple_list;
static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
static struct ixgbe_rss_filter_list filter_rss_list;
static struct ixgbe_flow_mem_list ixgbe_flow_list;
@@ -88,6 +79,7 @@ const struct ci_flow_engine_list ixgbe_flow_engine_list = {
&ixgbe_ethertype_flow_engine,
&ixgbe_syn_flow_engine,
&ixgbe_l2_tunnel_flow_engine,
+ &ixgbe_ntuple_flow_engine,
},
};
@@ -165,364 +157,6 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
* normally the packets should use network order.
*/
-/**
- * Parse the rule to see if it is a n-tuple rule.
- * And get the n-tuple filter info BTW.
- * pattern:
- * The first not void item can be ETH or IPV4.
- * The second not void item must be IPV4 if the first one is ETH.
- * The third not void item must be UDP or TCP.
- * The next not void item must be END.
- * action:
- * The first not void action should be QUEUE.
- * The next not void action should be END.
- * pattern example:
- * ITEM Spec Mask
- * ETH NULL NULL
- * IPV4 src_addr 192.168.1.20 0xFFFFFFFF
- * dst_addr 192.167.3.50 0xFFFFFFFF
- * next_proto_id 17 0xFF
- * UDP/TCP/ src_port 80 0xFFFF
- * SCTP dst_port 80 0xFFFF
- * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
- *
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY.
- *
- */
-static int
-cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action_queue *q_act,
- struct rte_eth_ntuple_filter *filter,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_ipv4 *ipv4_spec;
- const struct rte_flow_item_ipv4 *ipv4_mask;
- const struct rte_flow_item_tcp *tcp_spec;
- const struct rte_flow_item_tcp *tcp_mask;
- const struct rte_flow_item_udp *udp_spec;
- const struct rte_flow_item_udp *udp_mask;
- const struct rte_flow_item_sctp *sctp_spec;
- const struct rte_flow_item_sctp *sctp_mask;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- const struct rte_flow_item_vlan *vlan_spec;
- const struct rte_flow_item_vlan *vlan_mask;
- struct rte_flow_item_eth eth_null;
- struct rte_flow_item_vlan vlan_null;
-
- /* Priority must be 16-bit */
- if (attr->priority > UINT16_MAX) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr,
- "Priority must be 16-bit");
- }
-
- memset(ð_null, 0, sizeof(struct rte_flow_item_eth));
- memset(&vlan_null, 0, sizeof(struct rte_flow_item_vlan));
-
- /* the first not void item can be MAC or IPv4 */
- item = next_no_void_pattern(pattern, NULL);
-
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- /* Skip Ethernet */
- if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
- eth_spec = item->spec;
- eth_mask = item->mask;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error,
- EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
-
- }
- /* if the first item is MAC, the content should be NULL */
- if ((item->spec || item->mask) &&
- (memcmp(eth_spec, ð_null,
- sizeof(struct rte_flow_item_eth)) ||
- memcmp(eth_mask, ð_null,
- sizeof(struct rte_flow_item_eth)))) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- /* check if the next not void item is IPv4 or Vlan */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- }
-
- if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- vlan_spec = item->spec;
- vlan_mask = item->mask;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error,
- EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- /* the content should be NULL */
- if ((item->spec || item->mask) &&
- (memcmp(vlan_spec, &vlan_null,
- sizeof(struct rte_flow_item_vlan)) ||
- memcmp(vlan_mask, &vlan_null,
- sizeof(struct rte_flow_item_vlan)))) {
-
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- /* check if the next not void item is IPv4 */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- }
-
- if (item->mask) {
- /* get the IPv4 info */
- if (!item->spec || !item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid ntuple mask");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- ipv4_mask = item->mask;
- /**
- * Only support src & dst addresses, protocol,
- * others should be masked.
- */
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.type_of_service ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.time_to_live ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- if ((ipv4_mask->hdr.src_addr != 0 &&
- ipv4_mask->hdr.src_addr != UINT32_MAX) ||
- (ipv4_mask->hdr.dst_addr != 0 &&
- ipv4_mask->hdr.dst_addr != UINT32_MAX) ||
- (ipv4_mask->hdr.next_proto_id != UINT8_MAX &&
- ipv4_mask->hdr.next_proto_id != 0)) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- filter->dst_ip_mask = ipv4_mask->hdr.dst_addr;
- filter->src_ip_mask = ipv4_mask->hdr.src_addr;
- filter->proto_mask = ipv4_mask->hdr.next_proto_id;
-
- ipv4_spec = item->spec;
- filter->dst_ip = ipv4_spec->hdr.dst_addr;
- filter->src_ip = ipv4_spec->hdr.src_addr;
- filter->proto = ipv4_spec->hdr.next_proto_id;
- }
-
- /* check if the next not void item is TCP or UDP */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
- item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
- item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- if ((item->type != RTE_FLOW_ITEM_TYPE_END) &&
- (!item->spec && !item->mask)) {
- goto action;
- }
-
- /* get the TCP/UDP/SCTP info */
- if (item->type != RTE_FLOW_ITEM_TYPE_END &&
- (!item->spec || !item->mask)) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Invalid ntuple mask");
- return -rte_errno;
- }
-
- /*Not supported last point for range*/
- if (item->last) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
-
- }
-
- if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
- tcp_mask = item->mask;
-
- /**
- * Only support src & dst ports, tcp flags,
- * others should be masked.
- */
- if (tcp_mask->hdr.sent_seq ||
- tcp_mask->hdr.recv_ack ||
- tcp_mask->hdr.data_off ||
- tcp_mask->hdr.rx_win ||
- tcp_mask->hdr.cksum ||
- tcp_mask->hdr.tcp_urp) {
- memset(filter, 0,
- sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- if ((tcp_mask->hdr.src_port != 0 &&
- tcp_mask->hdr.src_port != UINT16_MAX) ||
- (tcp_mask->hdr.dst_port != 0 &&
- tcp_mask->hdr.dst_port != UINT16_MAX)) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- filter->dst_port_mask = tcp_mask->hdr.dst_port;
- filter->src_port_mask = tcp_mask->hdr.src_port;
- if (tcp_mask->hdr.tcp_flags == 0xFF) {
- filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG;
- } else if (!tcp_mask->hdr.tcp_flags) {
- filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG;
- } else {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- tcp_spec = item->spec;
- filter->dst_port = tcp_spec->hdr.dst_port;
- filter->src_port = tcp_spec->hdr.src_port;
- filter->tcp_flags = tcp_spec->hdr.tcp_flags;
- } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
- udp_mask = item->mask;
-
- /**
- * Only support src & dst ports,
- * others should be masked.
- */
- if (udp_mask->hdr.dgram_len ||
- udp_mask->hdr.dgram_cksum) {
- memset(filter, 0,
- sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
- if ((udp_mask->hdr.src_port != 0 &&
- udp_mask->hdr.src_port != UINT16_MAX) ||
- (udp_mask->hdr.dst_port != 0 &&
- udp_mask->hdr.dst_port != UINT16_MAX)) {
- rte_flow_error_set(error,
- EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- filter->dst_port_mask = udp_mask->hdr.dst_port;
- filter->src_port_mask = udp_mask->hdr.src_port;
-
- udp_spec = item->spec;
- filter->dst_port = udp_spec->hdr.dst_port;
- filter->src_port = udp_spec->hdr.src_port;
- } else if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
- sctp_mask = item->mask;
-
- /**
- * Only support src & dst ports,
- * others should be masked.
- */
- if (sctp_mask->hdr.tag ||
- sctp_mask->hdr.cksum) {
- memset(filter, 0,
- sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- filter->dst_port_mask = sctp_mask->hdr.dst_port;
- filter->src_port_mask = sctp_mask->hdr.src_port;
-
- sctp_spec = item->spec;
- filter->dst_port = sctp_spec->hdr.dst_port;
- filter->src_port = sctp_spec->hdr.src_port;
- } else {
- goto action;
- }
-
- /* check if the next not void item is END */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
-action:
-
- filter->queue = q_act->index;
-
- filter->priority = (uint16_t)attr->priority;
- if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
- filter->priority = 1;
-
- return 0;
-}
-
static int
ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
@@ -611,66 +245,6 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr
return 0;
}
-/* a specific function for ixgbe because the flags is specific */
-static int
-ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_eth_ntuple_filter *filter,
- struct rte_flow_error *error)
-{
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ci_flow_attr_check_param attr_param = {
- .allow_priority = true,
- };
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only queue is allowed here */
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_END
- },
- .driver_ctx = dev,
- .check = ixgbe_flow_actions_check,
- .max_actions = 1,
- };
- const struct rte_flow_action *action;
- int ret;
-
- if (hw->mac.type != ixgbe_mac_82599EB &&
- hw->mac.type != ixgbe_mac_X540)
- return -ENOTSUP;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, &attr_param, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
- action = parsed_actions.actions[0];
-
- ret = cons_parse_ntuple_filter(attr, pattern, action->conf, filter, error);
- if (ret)
- return ret;
-
- /* Ixgbe doesn't support tcp flags. */
- if (filter->flags & RTE_NTUPLE_FLAGS_TCP_FLAG) {
- memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by ntuple filter");
- return -rte_errno;
- }
-
- /* fixed value for ixgbe */
- filter->flags = RTE_5TUPLE_FLAGS;
- return 0;
-}
-
/* search next no void pattern and skip fuzzy */
static inline
const struct rte_flow_item *next_no_fuzzy_pattern(
@@ -2178,7 +1752,6 @@ ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
void
ixgbe_filterlist_init(void)
{
- TAILQ_INIT(&filter_ntuple_list);
TAILQ_INIT(&filter_fdir_list);
TAILQ_INIT(&filter_rss_list);
TAILQ_INIT(&ixgbe_flow_list);
@@ -2187,18 +1760,10 @@ ixgbe_filterlist_init(void)
void
ixgbe_filterlist_flush(void)
{
- struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
- while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
- TAILQ_REMOVE(&filter_ntuple_list,
- ntuple_filter_ptr,
- entries);
- rte_free(ntuple_filter_ptr);
- }
-
while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
TAILQ_REMOVE(&filter_fdir_list,
fdir_rule_ptr,
@@ -2237,13 +1802,11 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
{
struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
- struct rte_eth_ntuple_filter ntuple_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_hw_fdir_info *fdir_info =
IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
struct ixgbe_rte_flow_rss_conf rss_conf;
struct rte_flow *flow = NULL;
- struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
@@ -2283,31 +1846,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
return flow;
}
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
-
- if (!ret) {
- ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE);
- if (!ret) {
- ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter",
- sizeof(struct ixgbe_ntuple_filter_ele), 0);
- if (!ntuple_filter_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- rte_memcpy(&ntuple_filter_ptr->filter_info,
- &ntuple_filter,
- sizeof(struct rte_eth_ntuple_filter));
- TAILQ_INSERT_TAIL(&filter_ntuple_list,
- ntuple_filter_ptr, entries);
- flow->rule = ntuple_filter_ptr;
- flow->filter_type = RTE_ETH_FILTER_NTUPLE;
- return flow;
- }
- goto out;
- }
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -2429,7 +1967,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ixgbe_adapter *ad = dev->data->dev_private;
- struct rte_eth_ntuple_filter ntuple_filter;
struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
@@ -2449,12 +1986,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
if (!ret)
return 0;
- memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_parse_ntuple_filter(dev, attr, pattern,
- actions, &ntuple_filter, error);
- if (!ret)
- return 0;
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -2478,9 +2009,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
int ret;
struct rte_flow *pmd_flow = flow;
enum rte_filter_type filter_type = pmd_flow->filter_type;
- struct rte_eth_ntuple_filter ntuple_filter;
struct ixgbe_fdir_rule fdir_rule;
- struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
struct ixgbe_hw_fdir_info *fdir_info =
@@ -2503,19 +2032,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
}
switch (filter_type) {
- case RTE_ETH_FILTER_NTUPLE:
- ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
- pmd_flow->rule;
- rte_memcpy(&ntuple_filter,
- &ntuple_filter_ptr->filter_info,
- sizeof(struct rte_eth_ntuple_filter));
- ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
- if (!ret) {
- TAILQ_REMOVE(&filter_ntuple_list,
- ntuple_filter_ptr, entries);
- rte_free(ntuple_filter_ptr);
- }
- break;
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
rte_memcpy(&fdir_rule,
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index 4dabaca0ed..c1df74c0e7 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -12,6 +12,7 @@ enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
IXGBE_FLOW_ENGINE_TYPE_SYN,
IXGBE_FLOW_ENGINE_TYPE_L2_TUNNEL,
+ IXGBE_FLOW_ENGINE_TYPE_NTUPLE,
};
int
@@ -24,5 +25,6 @@ extern const struct ci_flow_engine_list ixgbe_flow_engine_list;
extern const struct ci_flow_engine ixgbe_ethertype_flow_engine;
extern const struct ci_flow_engine ixgbe_syn_flow_engine;
extern const struct ci_flow_engine ixgbe_l2_tunnel_flow_engine;
+extern const struct ci_flow_engine ixgbe_ntuple_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c b/drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c
new file mode 100644
index 0000000000..6c2b1cc9b1
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c
@@ -0,0 +1,483 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+#define IXGBE_MIN_N_TUPLE_PRIO 1
+#define IXGBE_MAX_N_TUPLE_PRIO 7
+
+struct ixgbe_ntuple_flow {
+ struct rte_flow flow;
+ struct rte_eth_ntuple_filter ntuple;
+};
+
+struct ixgbe_ntuple_ctx {
+ struct ci_flow_engine_ctx base;
+ struct rte_eth_ntuple_filter ntuple;
+};
+
+/**
+ * Ntuple filter graph implementation
+ * Pattern: START -> [ETH] -> [VLAN] -> IPV4 -> [TCP|UDP|SCTP] -> END
+ */
+
+enum ixgbe_ntuple_node_id {
+ IXGBE_NTUPLE_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_NTUPLE_NODE_ETH,
+ IXGBE_NTUPLE_NODE_VLAN,
+ IXGBE_NTUPLE_NODE_IPV4,
+ IXGBE_NTUPLE_NODE_TCP,
+ IXGBE_NTUPLE_NODE_UDP,
+ IXGBE_NTUPLE_NODE_SCTP,
+ IXGBE_NTUPLE_NODE_END,
+ IXGBE_NTUPLE_NODE_MAX,
+};
+
+static int
+ixgbe_validate_ntuple_ipv4(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *ipv4_mask;
+
+ ipv4_mask = item->mask;
+
+ /* Only src/dst addresses and protocol supported */
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.type_of_service ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.packet_id ||
+ ipv4_mask->hdr.fragment_offset ||
+ ipv4_mask->hdr.time_to_live ||
+ ipv4_mask->hdr.hdr_checksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst IP and protocol supported");
+ }
+
+ /* Masks must be 0 or all-ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.src_addr) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.dst_addr) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.next_proto_id)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial masks not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_ntuple_ipv4(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_ntuple_ctx *ntuple_ctx = ctx;
+ const struct rte_flow_item_ipv4 *ipv4_spec = item->spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+
+ ntuple_ctx->ntuple.dst_ip = ipv4_spec->hdr.dst_addr;
+ ntuple_ctx->ntuple.src_ip = ipv4_spec->hdr.src_addr;
+ ntuple_ctx->ntuple.proto = ipv4_spec->hdr.next_proto_id;
+
+ ntuple_ctx->ntuple.dst_ip_mask = ipv4_mask->hdr.dst_addr;
+ ntuple_ctx->ntuple.src_ip_mask = ipv4_mask->hdr.src_addr;
+ ntuple_ctx->ntuple.proto_mask = ipv4_mask->hdr.next_proto_id;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_ntuple_tcp(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *tcp_mask;
+
+ tcp_mask = item->mask;
+
+ /* Only src/dst ports and tcp_flags supported */
+ if (tcp_mask->hdr.sent_seq ||
+ tcp_mask->hdr.recv_ack ||
+ tcp_mask->hdr.data_off ||
+ tcp_mask->hdr.rx_win ||
+ tcp_mask->hdr.cksum ||
+ tcp_mask->hdr.tcp_urp) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst ports and flags supported");
+ }
+
+ /* Port masks must be 0 or all-ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial port masks not supported");
+ }
+
+ /* TCP flags not supported by hardware */
+ if (!CI_FIELD_IS_ZERO(&tcp_mask->hdr.tcp_flags)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "TCP flags filtering not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_ntuple_tcp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_ntuple_ctx *ntuple_ctx = ctx;
+ const struct rte_flow_item_tcp *tcp_spec = item->spec;
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+
+ ntuple_ctx->ntuple.dst_port = tcp_spec->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port = tcp_spec->hdr.src_port;
+
+ ntuple_ctx->ntuple.dst_port_mask = tcp_mask->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port_mask = tcp_mask->hdr.src_port;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_ntuple_udp(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_udp *udp_mask;
+
+ udp_mask = item->mask;
+
+ /* Only src/dst ports supported */
+ if (udp_mask->hdr.dgram_len ||
+ udp_mask->hdr.dgram_cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst ports supported");
+ }
+
+ /* Port masks must be 0 or all-ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial port masks not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_ntuple_udp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_ntuple_ctx *ntuple_ctx = ctx;
+ const struct rte_flow_item_udp *udp_spec = item->spec;
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+
+ ntuple_ctx->ntuple.dst_port = udp_spec->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port = udp_spec->hdr.src_port;
+
+ ntuple_ctx->ntuple.dst_port_mask = udp_mask->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port_mask = udp_mask->hdr.src_port;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_ntuple_sctp(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_sctp *sctp_mask;
+
+ sctp_mask = item->mask;
+
+ /* Only src/dst ports supported */
+ if (sctp_mask->hdr.tag ||
+ sctp_mask->hdr.cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst ports supported");
+ }
+
+ /* Port masks must be 0 or all-ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial port masks not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_ntuple_sctp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_ntuple_ctx *ntuple_ctx = ctx;
+ const struct rte_flow_item_sctp *sctp_spec = item->spec;
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+
+ ntuple_ctx->ntuple.dst_port = sctp_spec->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port = sctp_spec->hdr.src_port;
+
+ ntuple_ctx->ntuple.dst_port_mask = sctp_mask->hdr.dst_port;
+ ntuple_ctx->ntuple.src_port_mask = sctp_mask->hdr.src_port;
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_ntuple_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_NTUPLE_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_NTUPLE_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_NTUPLE_NODE_VLAN] = {
+ .name = "VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_NTUPLE_NODE_IPV4] = {
+ .name = "IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_ntuple_ipv4,
+ .process = ixgbe_process_ntuple_ipv4,
+ },
+ [IXGBE_NTUPLE_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_ntuple_tcp,
+ .process = ixgbe_process_ntuple_tcp,
+ },
+ [IXGBE_NTUPLE_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_ntuple_udp,
+ .process = ixgbe_process_ntuple_udp,
+ },
+ [IXGBE_NTUPLE_NODE_SCTP] = {
+ .name = "SCTP",
+ .type = RTE_FLOW_ITEM_TYPE_SCTP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_ntuple_sctp,
+ .process = ixgbe_process_ntuple_sctp,
+ },
+ [IXGBE_NTUPLE_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_NTUPLE_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_ETH,
+ IXGBE_NTUPLE_NODE_IPV4,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_VLAN,
+ IXGBE_NTUPLE_NODE_IPV4,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_VLAN] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_IPV4,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_TCP,
+ IXGBE_NTUPLE_NODE_UDP,
+ IXGBE_NTUPLE_NODE_SCTP,
+ IXGBE_NTUPLE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_TCP] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_UDP] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_NTUPLE_NODE_SCTP] = {
+ .next = (const size_t[]) {
+ IXGBE_NTUPLE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_flow_ntuple_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_ntuple_ctx *ntuple_ctx = (struct ixgbe_ntuple_ctx *)ctx;
+ struct ci_flow_attr_check_param attr_param = {
+ .allow_priority = true,
+ };
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only queue is allowed here */
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_flow_actions_check,
+ .max_actions = 1,
+ };
+ const struct rte_flow_action_queue *q_act;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, &attr_param, error);
+ if (ret)
+ return ret;
+
+ /* Priority must be 16-bit */
+ if (attr->priority > UINT16_MAX) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr,
+ "Priority must be 16-bit");
+ }
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ q_act = (const struct rte_flow_action_queue *)parsed_actions.actions[0]->conf;
+
+ ntuple_ctx->ntuple.queue = q_act->index;
+
+ ntuple_ctx->ntuple.priority = (uint16_t)attr->priority;
+
+ /* clamp priority */
+ /* TODO: check if weird clamping of >7 to 1 is a bug */
+ if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || attr->priority > IXGBE_MAX_N_TUPLE_PRIO)
+ ntuple_ctx->ntuple.priority = 1;
+
+ /* fixed value for ixgbe */
+ ntuple_ctx->ntuple.flags = RTE_5TUPLE_FLAGS;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_ntuple_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_ntuple_ctx *ntuple_ctx = (const struct ixgbe_ntuple_ctx *)ctx;
+ struct ixgbe_ntuple_flow *ntuple_flow = (struct ixgbe_ntuple_flow *)flow;
+
+ ntuple_flow->ntuple = ntuple_ctx->ntuple;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_ntuple_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_ntuple_flow *ntuple_flow = (struct ixgbe_ntuple_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret;
+
+ ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_flow->ntuple, TRUE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add L2 tunnel filter");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_ntuple_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_ntuple_flow *ntuple_flow = (struct ixgbe_ntuple_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret;
+
+ ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_flow->ntuple, FALSE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add L2 tunnel filter");
+ }
+
+ return 0;
+}
+
+static bool
+ixgbe_flow_ntuple_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_82599EB ||
+ hw->mac.type == ixgbe_mac_X540 ||
+ hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_ntuple_ops = {
+ .is_available = ixgbe_flow_ntuple_is_available,
+ .ctx_parse = ixgbe_flow_ntuple_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_ntuple_ctx_to_flow,
+ .flow_install = ixgbe_flow_ntuple_flow_install,
+ .flow_uninstall = ixgbe_flow_ntuple_flow_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_ntuple_flow_engine = {
+ .name = "ixgbe_ntuple",
+ .ctx_size = sizeof(struct ixgbe_ntuple_ctx),
+ .flow_size = sizeof(struct ixgbe_ntuple_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_NTUPLE,
+ .ops = &ixgbe_ntuple_ops,
+ .graph = &ixgbe_ntuple_graph,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index 0aaeb82a36..f3052daf4f 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -14,6 +14,7 @@ sources += files(
'ixgbe_flow_ethertype.c',
'ixgbe_flow_syn.c',
'ixgbe_flow_l2tun.c',
+ 'ixgbe_flow_ntuple.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 09/21] net/ixgbe: reimplement security parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (7 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 08/21] net/ixgbe: reimplement ntuple parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 10/21] net/ixgbe: reimplement FDIR parser Anatoly Burakov
` (12 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common flow engine infrastructure to
implement a flow parser for the security filter. As a result, flow item
checks have become more stringent:
- The mask is now explicitly validated to contain no unsupported fields,
where previously such fields were silently ignored
- The mask is also validated to fully mask src/dst addresses, as anything
else is inconsistent with the rte_flow API
Previously, the security parser was a special case; now it is a
first-class citizen.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 2 -
drivers/net/intel/ixgbe/ixgbe_flow.c | 112 +------
drivers/net/intel/ixgbe/ixgbe_flow.h | 2 +
drivers/net/intel/ixgbe/ixgbe_flow_security.c | 297 ++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
5 files changed, 301 insertions(+), 113 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_security.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index eaeeb35dea..ccfe23c233 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -348,8 +348,6 @@ struct ixgbe_l2_tn_info {
struct rte_flow {
struct ci_flow flow;
enum rte_filter_type filter_type;
- /* security flows are not rte_filter_type */
- bool is_security;
void *rule;
};
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index f509b47733..74ddc699fa 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -80,6 +80,7 @@ const struct ci_flow_engine_list ixgbe_flow_engine_list = {
&ixgbe_syn_flow_engine,
&ixgbe_l2_tunnel_flow_engine,
&ixgbe_ntuple_flow_engine,
+ &ixgbe_security_flow_engine,
},
};
@@ -157,94 +158,6 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
* normally the packets should use network order.
*/
-static int
-ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[], const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- const struct rte_flow_action_security *security;
- struct rte_security_session *session;
- const struct rte_flow_item *item;
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only security is allowed here */
- RTE_FLOW_ACTION_TYPE_SECURITY,
- RTE_FLOW_ACTION_TYPE_END
- },
- .max_actions = 1,
- };
- const struct rte_flow_action *action;
- struct ip_spec spec;
- int ret;
-
- if (hw->mac.type != ixgbe_mac_82599EB &&
- hw->mac.type != ixgbe_mac_X540 &&
- hw->mac.type != ixgbe_mac_X550 &&
- hw->mac.type != ixgbe_mac_X550EM_x &&
- hw->mac.type != ixgbe_mac_X550EM_a &&
- hw->mac.type != ixgbe_mac_E610)
- return -ENOTSUP;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
-
- action = parsed_actions.actions[0];
- security = action->conf;
-
- /* get the IP pattern*/
- item = next_no_void_pattern(pattern, NULL);
- while (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- if (item->last || item->type == RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "IP pattern missing.");
- return -rte_errno;
- }
- item = next_no_void_pattern(pattern, item);
- }
- if (item->spec == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_SPEC, item,
- "NULL IP pattern.");
- return -rte_errno;
- }
- spec.is_ipv6 = item->type == RTE_FLOW_ITEM_TYPE_IPV6;
- if (spec.is_ipv6) {
- const struct rte_flow_item_ipv6 *ipv6 = item->spec;
- spec.spec.ipv6 = *ipv6;
- } else {
- const struct rte_flow_item_ipv4 *ipv4 = item->spec;
- spec.spec.ipv4 = *ipv4;
- }
-
- /*
- * we get pointer to security session from security action, which is
- * const. however, we do need to act on the session, so either we do
- * some kind of pointer based lookup to get session pointer internally
- * (which quickly gets unwieldy for lots of flows case), or we simply
- * cast away constness. the latter path was chosen.
- */
- session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
- ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec);
- if (ret) {
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "Failed to add security session.");
- return -rte_errno;
- }
- return 0;
-}
-
/* search next no void pattern and skip fuzzy */
static inline
const struct rte_flow_item *next_no_fuzzy_pattern(
@@ -1837,15 +1750,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
- if (!ret) {
- flow->is_security = true;
- return flow;
- }
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -1979,13 +1883,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
/* fall back to legacy engines */
- /**
- * Special case for flow action type RTE_FLOW_ACTION_TYPE_SECURITY
- */
- ret = ixgbe_parse_security_filter(dev, attr, pattern, actions, error);
- if (!ret)
- return 0;
-
memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
actions, &fdir_rule, error);
@@ -2025,12 +1922,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
/* fall back to legacy engines */
- /* Special case for SECURITY flows */
- if (flow->is_security) {
- ret = 0;
- goto free;
- }
-
switch (filter_type) {
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
@@ -2071,7 +1962,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
return ret;
}
-free:
TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
TAILQ_REMOVE(&ixgbe_flow_list,
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index c1df74c0e7..daff23e227 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -13,6 +13,7 @@ enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_SYN,
IXGBE_FLOW_ENGINE_TYPE_L2_TUNNEL,
IXGBE_FLOW_ENGINE_TYPE_NTUPLE,
+ IXGBE_FLOW_ENGINE_TYPE_SECURITY,
};
int
@@ -26,5 +27,6 @@ extern const struct ci_flow_engine ixgbe_ethertype_flow_engine;
extern const struct ci_flow_engine ixgbe_syn_flow_engine;
extern const struct ci_flow_engine ixgbe_l2_tunnel_flow_engine;
extern const struct ci_flow_engine ixgbe_ntuple_flow_engine;
+extern const struct ci_flow_engine ixgbe_security_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_security.c b/drivers/net/intel/ixgbe/ixgbe_flow_security.c
new file mode 100644
index 0000000000..c22cc394e3
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_security.c
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+#include <rte_security_driver.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_security_filter {
+ struct ip_spec spec;
+ struct rte_security_session *session;
+};
+
+struct ixgbe_security_flow {
+ struct rte_flow flow;
+ struct ixgbe_security_filter security;
+};
+
+struct ixgbe_security_ctx {
+ struct ci_flow_engine_ctx base;
+ struct ixgbe_security_filter security;
+};
+
+/**
+ * Security filter graph implementation
+ * Pattern: START -> IPV4 | IPV6 -> END
+ */
+
+enum ixgbe_security_node_id {
+ IXGBE_SECURITY_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_SECURITY_NODE_IPV4,
+ IXGBE_SECURITY_NODE_IPV6,
+ IXGBE_SECURITY_NODE_END,
+ IXGBE_SECURITY_NODE_MAX,
+};
+
+static int
+ixgbe_validate_security_ipv4(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+
+ /* only src/dst addresses are supported */
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.type_of_service ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.packet_id ||
+ ipv4_mask->hdr.fragment_offset ||
+ ipv4_mask->hdr.next_proto_id ||
+ ipv4_mask->hdr.time_to_live ||
+ ipv4_mask->hdr.hdr_checksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 mask");
+ }
+
+ /* both src/dst addresses must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&ipv4_mask->hdr.src_addr) ||
+ !CI_FIELD_IS_MASKED(&ipv4_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 mask");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_security_ipv4(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_security_ctx *sec_ctx = (struct ixgbe_security_ctx *)ctx;
+ const struct rte_flow_item_ipv4 *ipv4_spec = item->spec;
+
+ /* copy entire spec */
+ sec_ctx->security.spec.spec.ipv4 = *ipv4_spec;
+ sec_ctx->security.spec.is_ipv6 = false;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_security_ipv6(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv6 *ipv6_mask = item->mask;
+
+ /* only src/dst addresses are supported */
+ if (ipv6_mask->hdr.vtc_flow ||
+ ipv6_mask->hdr.payload_len ||
+ ipv6_mask->hdr.proto ||
+ ipv6_mask->hdr.hop_limits) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 mask");
+ }
+ /* both src/dst addresses must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&ipv6_mask->hdr.src_addr) ||
+ !CI_FIELD_IS_MASKED(&ipv6_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 mask");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_security_ipv6(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_security_ctx *sec_ctx = (struct ixgbe_security_ctx *)ctx;
+ const struct rte_flow_item_ipv6 *ipv6_spec = item->spec;
+
+ /* copy entire spec */
+ sec_ctx->security.spec.spec.ipv6 = *ipv6_spec;
+ sec_ctx->security.spec.is_ipv6 = true;
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_security_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_SECURITY_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_SECURITY_NODE_IPV4] = {
+ .name = "IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_security_ipv4,
+ .process = ixgbe_process_security_ipv4,
+ },
+ [IXGBE_SECURITY_NODE_IPV6] = {
+ .name = "IPV6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_security_ipv6,
+ .process = ixgbe_process_security_ipv6,
+ },
+ [IXGBE_SECURITY_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_SECURITY_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_SECURITY_NODE_IPV4,
+ IXGBE_SECURITY_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SECURITY_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ IXGBE_SECURITY_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_SECURITY_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ IXGBE_SECURITY_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_flow_security_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_security_ctx *sec_ctx = (struct ixgbe_security_ctx *)ctx;
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only security is allowed here */
+ RTE_FLOW_ACTION_TYPE_SECURITY,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 1,
+ };
+ const struct rte_flow_action_security *security;
+ struct rte_security_session *session;
+ const struct ixgbe_crypto_session *ic_session;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ security = (const struct rte_flow_action_security *)parsed_actions.actions[0]->conf;
+
+ if (security->security_session == NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, parsed_actions.actions[0],
+ "NULL security session");
+ }
+
+ /* cast away constness since we need to store the session pointer in the context */
+ session = RTE_CAST_PTR(struct rte_security_session *, security->security_session);
+
+ /* verify the session belongs to this device and uses a supported op */
+ ic_session = SECURITY_GET_SESS_PRIV(session);
+ if (ic_session->dev != ctx->dev) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, parsed_actions.actions[0],
+ "Security session was created for a different device");
+ }
+ if (ic_session->op != IXGBE_OP_AUTHENTICATED_DECRYPTION) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, parsed_actions.actions[0],
+ "Only authenticated decryption is supported");
+ }
+ sec_ctx->security.session = session;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_security_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_security_ctx *security_ctx = (const struct ixgbe_security_ctx *)ctx;
+ struct ixgbe_security_flow *security_flow = (struct ixgbe_security_flow *)flow;
+
+ security_flow->security = security_ctx->security;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_security_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_security_flow *security_flow = (struct ixgbe_security_flow *)flow;
+ struct ixgbe_security_filter *filter = &security_flow->security;
+ int ret;
+
+ ret = ixgbe_crypto_add_ingress_sa_from_flow(filter->session, &filter->spec);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add ingress SA from flow");;
+ }
+ return 0;
+}
+
+static bool
+ixgbe_flow_security_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_82599EB ||
+ hw->mac.type == ixgbe_mac_X540 ||
+ hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_security_ops = {
+ .is_available = ixgbe_flow_security_is_available,
+ .ctx_parse = ixgbe_flow_security_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_security_ctx_to_flow,
+ .flow_install = ixgbe_flow_security_flow_install,
+ /* uninstall is not handled in rte_flow */
+};
+
+const struct ci_flow_engine ixgbe_security_flow_engine = {
+ .name = "ixgbe_security",
+ .ctx_size = sizeof(struct ixgbe_security_ctx),
+ .flow_size = sizeof(struct ixgbe_security_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_SECURITY,
+ .ops = &ixgbe_security_ops,
+ .graph = &ixgbe_security_graph,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index f3052daf4f..65ffe19939 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -15,6 +15,7 @@ sources += files(
'ixgbe_flow_syn.c',
'ixgbe_flow_l2tun.c',
'ixgbe_flow_ntuple.c',
+ 'ixgbe_flow_security.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 10/21] net/ixgbe: reimplement FDIR parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (8 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 09/21] net/ixgbe: reimplement security parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 11/21] net/ixgbe: reimplement hash parser Anatoly Burakov
` (11 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement
a flow parser for the flow director (FDIR).
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 1 +
drivers/net/intel/ixgbe/ixgbe_fdir.c | 13 +-
drivers/net/intel/ixgbe/ixgbe_flow.c | 1542 +--------------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 4 +
drivers/net/intel/ixgbe/ixgbe_flow_fdir.c | 1510 ++++++++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
6 files changed, 1526 insertions(+), 1545 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_fdir.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index ccfe23c233..17b9fa918f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -50,6 +50,7 @@
#define IXGBE_VMDQ_DCB_NB_QUEUES IXGBE_MAX_RX_QUEUE_NUM
#define IXGBE_DCB_NB_QUEUES IXGBE_MAX_RX_QUEUE_NUM
#define IXGBE_NONE_MODE_TX_NB_QUEUES 64
+#define IXGBE_MAX_FLX_SOURCE_OFF 62
#ifndef NBBY
#define NBBY 8 /* number of bits in a byte */
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 0bdfbd411a..2556b4fb3e 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -36,7 +36,6 @@
#define SIG_BUCKET_256KB_HASH_MASK 0x7FFF /* 15 bits */
#define IXGBE_DEFAULT_FLEXBYTES_OFFSET 12 /* default flexbytes offset in bytes */
#define IXGBE_FDIR_MAX_FLEX_LEN 2 /* len in bytes of flexbytes */
-#define IXGBE_MAX_FLX_SOURCE_OFF 62
#define IXGBE_FDIRCTRL_FLEX_MASK (0x1F << IXGBE_FDIRCTRL_FLEX_SHIFT)
#define IXGBE_FDIRCMD_CMD_INTERVAL_US 10
@@ -635,10 +634,11 @@ int
ixgbe_fdir_configure(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct rte_eth_fdir_conf *global_fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
int err;
uint32_t fdirctrl, pbsize;
int i;
- enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+ enum rte_fdir_mode mode = global_fdir_conf->mode;
PMD_INIT_FUNC_TRACE();
@@ -659,7 +659,10 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
mode != RTE_FDIR_MODE_PERFECT)
return -ENOSYS;
- err = configure_fdir_flags(IXGBE_DEV_FDIR_CONF(dev), &fdirctrl);
+ /* drop queue is always fixed */
+ global_fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE;
+
+ err = configure_fdir_flags(global_fdir_conf, &fdirctrl);
if (err)
return err;
@@ -681,12 +684,12 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
for (i = 1; i < 8; i++)
IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
- err = fdir_set_input_mask(dev, &IXGBE_DEV_FDIR_CONF(dev)->mask);
+ err = fdir_set_input_mask(dev, &global_fdir_conf->mask);
if (err < 0) {
PMD_INIT_LOG(ERR, " Error on setting FD mask");
return err;
}
- err = ixgbe_set_fdir_flex_conf(dev, &IXGBE_DEV_FDIR_CONF(dev)->flex_conf,
+ err = ixgbe_set_fdir_flex_conf(dev, &global_fdir_conf->flex_conf,
&fdirctrl);
if (err < 0) {
PMD_INIT_LOG(ERR, " Error on setting FD flexible arguments.");
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 74ddc699fa..ea32025079 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -48,13 +48,6 @@
#include "../common/flow_engine.h"
#include "ixgbe_flow.h"
-#define IXGBE_MAX_FLX_SOURCE_OFF 62
-
-/* fdir filter list structure */
-struct ixgbe_fdir_rule_ele {
- TAILQ_ENTRY(ixgbe_fdir_rule_ele) entries;
- struct ixgbe_fdir_rule filter_info;
-};
/* rss filter list structure */
struct ixgbe_rss_conf_ele {
TAILQ_ENTRY(ixgbe_rss_conf_ele) entries;
@@ -66,11 +59,9 @@ struct ixgbe_flow_mem {
struct rte_flow *flow;
};
-TAILQ_HEAD(ixgbe_fdir_rule_filter_list, ixgbe_fdir_rule_ele);
TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
-static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
static struct ixgbe_rss_filter_list filter_rss_list;
static struct ixgbe_flow_mem_list ixgbe_flow_list;
@@ -81,28 +72,10 @@ const struct ci_flow_engine_list ixgbe_flow_engine_list = {
&ixgbe_l2_tunnel_flow_engine,
&ixgbe_ntuple_flow_engine,
&ixgbe_security_flow_engine,
+ &ixgbe_fdir_flow_engine,
+ &ixgbe_fdir_tunnel_flow_engine,
},
};
-
-/**
- * Endless loop will never happen with below assumption
- * 1. there is at least one no-void item(END)
- * 2. cur is before END.
- */
-static inline
-const struct rte_flow_item *next_no_void_pattern(
- const struct rte_flow_item pattern[],
- const struct rte_flow_item *cur)
-{
- const struct rte_flow_item *next =
- cur ? cur + 1 : &pattern[0];
- while (1) {
- if (next->type != RTE_FLOW_ITEM_TYPE_VOID)
- return next;
- next++;
- }
-}
-
/*
* All ixgbe engines mostly check the same stuff, so use a common check.
*/
@@ -158,1403 +131,6 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
* normally the packets should use network order.
*/
-/* search next no void pattern and skip fuzzy */
-static inline
-const struct rte_flow_item *next_no_fuzzy_pattern(
- const struct rte_flow_item pattern[],
- const struct rte_flow_item *cur)
-{
- const struct rte_flow_item *next =
- next_no_void_pattern(pattern, cur);
- while (1) {
- if (next->type != RTE_FLOW_ITEM_TYPE_FUZZY)
- return next;
- next = next_no_void_pattern(pattern, next);
- }
-}
-
-static inline uint8_t signature_match(const struct rte_flow_item pattern[])
-{
- const struct rte_flow_item_fuzzy *spec, *last, *mask;
- const struct rte_flow_item *item;
- uint32_t sh, lh, mh;
- int i = 0;
-
- while (1) {
- item = pattern + i;
- if (item->type == RTE_FLOW_ITEM_TYPE_END)
- break;
-
- if (item->type == RTE_FLOW_ITEM_TYPE_FUZZY) {
- spec = item->spec;
- last = item->last;
- mask = item->mask;
-
- if (!spec || !mask)
- return 0;
-
- sh = spec->thresh;
-
- if (!last)
- lh = sh;
- else
- lh = last->thresh;
-
- mh = mask->thresh;
- sh = sh & mh;
- lh = lh & mh;
-
- if (!sh || sh > lh)
- return 0;
-
- return 1;
- }
-
- i++;
- }
-
- return 0;
-}
-
-/**
- * Parse the rule to see if it is a IP or MAC VLAN flow director rule.
- * And get the flow director filter info BTW.
- * UDP/TCP/SCTP PATTERN:
- * The first not void item can be ETH or IPV4 or IPV6
- * The second not void item must be IPV4 or IPV6 if the first one is ETH.
- * The next not void item could be UDP or TCP or SCTP (optional)
- * The next not void item could be RAW (for flexbyte, optional)
- * The next not void item must be END.
- * A Fuzzy Match pattern can appear at any place before END.
- * Fuzzy Match is optional for IPV4 but is required for IPV6
- * MAC VLAN PATTERN:
- * The first not void item must be ETH.
- * The second not void item must be MAC VLAN.
- * The next not void item must be END.
- * ACTION:
- * The first not void action should be QUEUE or DROP.
- * The second not void optional action should be MARK,
- * mark_id is a uint32_t number.
- * The next not void action should be END.
- * UDP/TCP/SCTP pattern example:
- * ITEM Spec Mask
- * ETH NULL NULL
- * IPV4 src_addr 192.168.1.20 0xFFFFFFFF
- * dst_addr 192.167.3.50 0xFFFFFFFF
- * UDP/TCP/SCTP src_port 80 0xFFFF
- * dst_port 80 0xFFFF
- * FLEX relative 0 0x1
- * search 0 0x1
- * reserved 0 0
- * offset 12 0xFFFFFFFF
- * limit 0 0xFFFF
- * length 2 0xFFFF
- * pattern[0] 0x86 0xFF
- * pattern[1] 0xDD 0xFF
- * END
- * MAC VLAN pattern example:
- * ITEM Spec Mask
- * ETH dst_addr
- {0xAC, 0x7B, 0xA1, {0xFF, 0xFF, 0xFF,
- 0x2C, 0x6D, 0x36} 0xFF, 0xFF, 0xFF}
- * MAC VLAN tci 0x2016 0xEFFF
- * END
- * Other members in mask and spec should set to 0x00.
- * Item->last should be NULL.
- */
-static int
-ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct ci_flow_actions *parsed_actions,
- struct ixgbe_fdir_rule *rule,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec;
- const struct rte_flow_item_ipv4 *ipv4_mask;
- const struct rte_flow_item_ipv6 *ipv6_spec;
- const struct rte_flow_item_ipv6 *ipv6_mask;
- const struct rte_flow_item_tcp *tcp_spec;
- const struct rte_flow_item_tcp *tcp_mask;
- const struct rte_flow_item_udp *udp_spec;
- const struct rte_flow_item_udp *udp_mask;
- const struct rte_flow_item_sctp *sctp_spec;
- const struct rte_flow_item_sctp *sctp_mask;
- const struct rte_flow_item_vlan *vlan_spec;
- const struct rte_flow_item_vlan *vlan_mask;
- const struct rte_flow_item_raw *raw_mask;
- const struct rte_flow_item_raw *raw_spec;
- const struct rte_flow_action *fwd_action, *aux_action;
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint8_t j;
-
- fwd_action = parsed_actions->actions[0];
- /* can be NULL */
- aux_action = parsed_actions->actions[1];
-
- /* check if this is a signature match */
- if (signature_match(pattern))
- rule->mode = RTE_FDIR_MODE_SIGNATURE;
- else
- rule->mode = RTE_FDIR_MODE_PERFECT;
-
- /* set up action */
- if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
- const struct rte_flow_action_queue *q_act = fwd_action->conf;
- rule->queue = q_act->index;
- } else {
- /* signature mode does not support drop action. */
- if (rule->mode == RTE_FDIR_MODE_SIGNATURE) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, fwd_action,
- "Signature mode does not support drop action.");
- return -rte_errno;
- }
- rule->fdirflags = IXGBE_FDIRCMD_DROP;
- }
-
- /* set up mark action */
- if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) {
- const struct rte_flow_action_mark *m_act = aux_action->conf;
- rule->soft_id = m_act->id;
- }
-
- /**
- * Some fields may not be provided. Set spec to 0 and mask to default
- * value. So, we need not do anything for the not provided fields later.
- */
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
- rule->mask.vlan_tci_mask = 0;
- rule->mask.flex_bytes_mask = 0;
- rule->mask.dst_port_mask = 0;
- rule->mask.src_port_mask = 0;
-
- /**
- * The first not void item should be
- * MAC or IPv4 or TCP or UDP or SCTP.
- */
- item = next_no_fuzzy_pattern(pattern, NULL);
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
- item->type != RTE_FLOW_ITEM_TYPE_TCP &&
- item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Get the MAC info. */
- if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
- /**
- * Only support vlan and dst MAC address,
- * others should be masked.
- */
- if (item->spec && !item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- if (item->spec) {
- rule->b_spec = TRUE;
- eth_spec = item->spec;
-
- /* Get the dst MAC. */
- for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
- rule->ixgbe_fdir.formatted.inner_mac[j] =
- eth_spec->hdr.dst_addr.addr_bytes[j];
- }
- }
-
-
- if (item->mask) {
-
- rule->b_mask = TRUE;
- eth_mask = item->mask;
-
- /* Ether type should be masked. */
- if (eth_mask->hdr.ether_type ||
- rule->mode == RTE_FDIR_MODE_SIGNATURE) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /* If ethernet has meaning, it means MAC VLAN mode. */
- rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
-
- /**
- * src MAC address must be masked,
- * and don't support dst MAC address mask.
- */
- for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
- if (eth_mask->hdr.src_addr.addr_bytes[j] ||
- eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) {
- memset(rule, 0,
- sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* When no VLAN, considered as full mask. */
- rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
- }
- /*** If both spec and mask are item,
- * it means don't care about ETH.
- * Do nothing.
- */
-
- /**
- * Check if the next not void item is vlan or ipv4.
- * IPv6 is not supported.
- */
- item = next_no_fuzzy_pattern(pattern, item);
- if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
- if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- } else {
- if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
- }
-
- if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- if (!(item->spec && item->mask)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- vlan_spec = item->spec;
- vlan_mask = item->mask;
-
- rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
-
- rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
- rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
- /* More than one tags are not supported. */
-
- /* Next not void item must be END */
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Get the IPV4 info. */
- if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- /**
- * Set the flow type even if there's no content
- * as we must have a flow type.
- */
- rule->ixgbe_fdir.formatted.flow_type =
- IXGBE_ATR_FLOW_TYPE_IPV4;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- /**
- * Only care about src & dst addresses,
- * others should be masked.
- */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
- ipv4_mask = item->mask;
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.type_of_service ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.fragment_offset ||
- ipv4_mask->hdr.time_to_live ||
- ipv4_mask->hdr.next_proto_id ||
- ipv4_mask->hdr.hdr_checksum) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
- rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
-
- if (item->spec) {
- rule->b_spec = TRUE;
- ipv4_spec = item->spec;
- rule->ixgbe_fdir.formatted.dst_ip[0] =
- ipv4_spec->hdr.dst_addr;
- rule->ixgbe_fdir.formatted.src_ip[0] =
- ipv4_spec->hdr.src_addr;
- }
-
- /**
- * Check if the next not void item is
- * TCP or UDP or SCTP or END.
- */
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
- item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
- item->type != RTE_FLOW_ITEM_TYPE_END &&
- item->type != RTE_FLOW_ITEM_TYPE_RAW) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Get the IPV6 info. */
- if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- /**
- * Set the flow type even if there's no content
- * as we must have a flow type.
- */
- rule->ixgbe_fdir.formatted.flow_type =
- IXGBE_ATR_FLOW_TYPE_IPV6;
-
- /**
- * 1. must signature match
- * 2. not support last
- * 3. mask must not null
- */
- if (rule->mode != RTE_FDIR_MODE_SIGNATURE ||
- item->last ||
- !item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- rule->b_mask = TRUE;
- ipv6_mask = item->mask;
- if (ipv6_mask->hdr.vtc_flow ||
- ipv6_mask->hdr.payload_len ||
- ipv6_mask->hdr.proto ||
- ipv6_mask->hdr.hop_limits) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /* check src addr mask */
- for (j = 0; j < 16; j++) {
- if (ipv6_mask->hdr.src_addr.a[j] == 0) {
- rule->mask.src_ipv6_mask &= ~(1 << j);
- } else if (ipv6_mask->hdr.src_addr.a[j] != UINT8_MAX) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* check dst addr mask */
- for (j = 0; j < 16; j++) {
- if (ipv6_mask->hdr.dst_addr.a[j] == 0) {
- rule->mask.dst_ipv6_mask &= ~(1 << j);
- } else if (ipv6_mask->hdr.dst_addr.a[j] != UINT8_MAX) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- if (item->spec) {
- rule->b_spec = TRUE;
- ipv6_spec = item->spec;
- rte_memcpy(rule->ixgbe_fdir.formatted.src_ip,
- &ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(rule->ixgbe_fdir.formatted.dst_ip,
- &ipv6_spec->hdr.dst_addr, 16);
- }
-
- /**
- * Check if the next not void item is
- * TCP or UDP or SCTP or END.
- */
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
- item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
- item->type != RTE_FLOW_ITEM_TYPE_END &&
- item->type != RTE_FLOW_ITEM_TYPE_RAW) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Get the TCP info. */
- if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
- /**
- * Set the flow type even if there's no content
- * as we must have a flow type.
- */
- rule->ixgbe_fdir.formatted.flow_type |=
- IXGBE_ATR_L4TYPE_TCP;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- /**
- * Only care about src & dst ports,
- * others should be masked.
- */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
- tcp_mask = item->mask;
- if (tcp_mask->hdr.sent_seq ||
- tcp_mask->hdr.recv_ack ||
- tcp_mask->hdr.data_off ||
- tcp_mask->hdr.tcp_flags ||
- tcp_mask->hdr.rx_win ||
- tcp_mask->hdr.cksum ||
- tcp_mask->hdr.tcp_urp) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->mask.src_port_mask = tcp_mask->hdr.src_port;
- rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
-
- if (item->spec) {
- rule->b_spec = TRUE;
- tcp_spec = item->spec;
- rule->ixgbe_fdir.formatted.src_port =
- tcp_spec->hdr.src_port;
- rule->ixgbe_fdir.formatted.dst_port =
- tcp_spec->hdr.dst_port;
- }
-
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
- item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- }
-
- /* Get the UDP info */
- if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
- /**
- * Set the flow type even if there's no content
- * as we must have a flow type.
- */
- rule->ixgbe_fdir.formatted.flow_type |=
- IXGBE_ATR_L4TYPE_UDP;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- /**
- * Only care about src & dst ports,
- * others should be masked.
- */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
- udp_mask = item->mask;
- if (udp_mask->hdr.dgram_len ||
- udp_mask->hdr.dgram_cksum) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->mask.src_port_mask = udp_mask->hdr.src_port;
- rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
-
- if (item->spec) {
- rule->b_spec = TRUE;
- udp_spec = item->spec;
- rule->ixgbe_fdir.formatted.src_port =
- udp_spec->hdr.src_port;
- rule->ixgbe_fdir.formatted.dst_port =
- udp_spec->hdr.dst_port;
- }
-
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
- item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- }
-
- /* Get the SCTP info */
- if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
- /**
- * Set the flow type even if there's no content
- * as we must have a flow type.
- */
- rule->ixgbe_fdir.formatted.flow_type |=
- IXGBE_ATR_L4TYPE_SCTP;
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* only some mac types support sctp port */
- if (hw->mac.type == ixgbe_mac_X550 ||
- hw->mac.type == ixgbe_mac_X550EM_x ||
- hw->mac.type == ixgbe_mac_X550EM_a ||
- hw->mac.type == ixgbe_mac_E610) {
- /**
- * Only care about src & dst ports,
- * others should be masked.
- */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
- sctp_mask = item->mask;
- if (sctp_mask->hdr.tag ||
- sctp_mask->hdr.cksum) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- rule->mask.src_port_mask = sctp_mask->hdr.src_port;
- rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
-
- if (item->spec) {
- rule->b_spec = TRUE;
- sctp_spec = item->spec;
- rule->ixgbe_fdir.formatted.src_port =
- sctp_spec->hdr.src_port;
- rule->ixgbe_fdir.formatted.dst_port =
- sctp_spec->hdr.dst_port;
- }
- /* others even sctp port is not supported */
- } else {
- sctp_mask = item->mask;
- if (sctp_mask &&
- (sctp_mask->hdr.src_port ||
- sctp_mask->hdr.dst_port ||
- sctp_mask->hdr.tag ||
- sctp_mask->hdr.cksum)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
- item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Get the flex byte info */
- if (item->type == RTE_FLOW_ITEM_TYPE_RAW) {
- /* Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- /* mask should not be null */
- if (!item->mask || !item->spec) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- raw_mask = item->mask;
-
- /* check mask */
- if (raw_mask->relative != 0x1 ||
- raw_mask->search != 0x1 ||
- raw_mask->reserved != 0x0 ||
- (uint32_t)raw_mask->offset != 0xffffffff ||
- raw_mask->limit != 0xffff ||
- raw_mask->length != 0xffff) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- raw_spec = item->spec;
-
- /* check spec */
- if (raw_spec->relative != 0 ||
- raw_spec->search != 0 ||
- raw_spec->reserved != 0 ||
- raw_spec->offset > IXGBE_MAX_FLX_SOURCE_OFF ||
- raw_spec->offset % 2 ||
- raw_spec->limit != 0 ||
- raw_spec->length != 2 ||
- /* pattern can't be 0xffff */
- (raw_spec->pattern[0] == 0xff &&
- raw_spec->pattern[1] == 0xff)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /* check pattern mask */
- if (raw_mask->pattern[0] != 0xff ||
- raw_mask->pattern[1] != 0xff) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- rule->mask.flex_bytes_mask = 0xffff;
- rule->ixgbe_fdir.formatted.flex_bytes =
- (((uint16_t)raw_spec->pattern[1]) << 8) |
- raw_spec->pattern[0];
- rule->flex_bytes_offset = raw_spec->offset;
- }
-
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- /* check if the next not void item is END */
- item = next_no_fuzzy_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- return 0;
-}
-
-#define NVGRE_PROTOCOL 0x6558
-
-/**
- * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
- * And get the flow director filter info BTW.
- * VxLAN PATTERN:
- * The first not void item must be ETH.
- * The second not void item must be IPV4/ IPV6.
- * The third not void item must be NVGRE.
- * The next not void item must be END.
- * NVGRE PATTERN:
- * The first not void item must be ETH.
- * The second not void item must be IPV4/ IPV6.
- * The third not void item must be NVGRE.
- * The next not void item must be END.
- * ACTION:
- * The first not void action should be QUEUE or DROP.
- * The second not void optional action should be MARK,
- * mark_id is a uint32_t number.
- * The next not void action should be END.
- * VxLAN pattern example:
- * ITEM Spec Mask
- * ETH NULL NULL
- * IPV4/IPV6 NULL NULL
- * UDP NULL NULL
- * VxLAN vni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF}
- * MAC VLAN tci 0x2016 0xEFFF
- * END
- * NEGRV pattern example:
- * ITEM Spec Mask
- * ETH NULL NULL
- * IPV4/IPV6 NULL NULL
- * NVGRE protocol 0x6558 0xFFFF
- * tni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF}
- * MAC VLAN tci 0x2016 0xEFFF
- * END
- * other members in mask and spec should set to 0x00.
- * item->last should be NULL.
- */
-static int
-ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_item pattern[],
- const struct ci_flow_actions *parsed_actions,
- struct ixgbe_fdir_rule *rule,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item *item;
- const struct rte_flow_item_vxlan *vxlan_spec;
- const struct rte_flow_item_vxlan *vxlan_mask;
- const struct rte_flow_item_nvgre *nvgre_spec;
- const struct rte_flow_item_nvgre *nvgre_mask;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- const struct rte_flow_item_vlan *vlan_spec;
- const struct rte_flow_item_vlan *vlan_mask;
- const struct rte_flow_action *fwd_action, *aux_action;
- uint32_t j;
-
- fwd_action = parsed_actions->actions[0];
- /* can be NULL */
- aux_action = parsed_actions->actions[1];
-
- /* set up queue/drop action */
- if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
- const struct rte_flow_action_queue *q_act = fwd_action->conf;
- rule->queue = q_act->index;
- } else {
- rule->fdirflags = IXGBE_FDIRCMD_DROP;
- }
-
- /* set up mark action */
- if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) {
- const struct rte_flow_action_mark *mark = aux_action->conf;
- rule->soft_id = mark->id;
- }
-
- /**
- * Some fields may not be provided. Set spec to 0 and mask to default
- * value. So, we need not do anything for the not provided fields later.
- */
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
- rule->mask.vlan_tci_mask = 0;
-
- /**
- * The first not void item should be
- * MAC or IPv4 or IPv6 or UDP or VxLAN.
- */
- item = next_no_void_pattern(pattern, NULL);
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
- item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
- item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
-
- /* Skip MAC. */
- if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
- /* Only used to describe the protocol stack. */
- if (item->spec || item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /* Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Check if the next not void item is IPv4 or IPv6. */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
- item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Skip IP. */
- if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- /* Only used to describe the protocol stack. */
- if (item->spec || item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Check if the next not void item is UDP or NVGRE. */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
- item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Skip UDP. */
- if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
- /* Only used to describe the protocol stack. */
- if (item->spec || item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- /* Check if the next not void item is VxLAN. */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* Get the VxLAN info */
- if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
- rule->ixgbe_fdir.formatted.tunnel_type =
- IXGBE_FDIR_VXLAN_TUNNEL_TYPE;
-
- /* Only care about VNI, others should be masked. */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
-
- /* Tunnel type is always meaningful. */
- rule->mask.tunnel_type_mask = 1;
-
- vxlan_mask = item->mask;
- if (vxlan_mask->hdr.flags) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /* VNI must be totally masked or not. */
- if ((vxlan_mask->hdr.vni[0] || vxlan_mask->hdr.vni[1] ||
- vxlan_mask->hdr.vni[2]) &&
- ((vxlan_mask->hdr.vni[0] != 0xFF) ||
- (vxlan_mask->hdr.vni[1] != 0xFF) ||
- (vxlan_mask->hdr.vni[2] != 0xFF))) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni,
- RTE_DIM(vxlan_mask->hdr.vni));
-
- if (item->spec) {
- rule->b_spec = TRUE;
- vxlan_spec = item->spec;
- rte_memcpy(((uint8_t *)
- &rule->ixgbe_fdir.formatted.tni_vni),
- vxlan_spec->hdr.vni, RTE_DIM(vxlan_spec->hdr.vni));
- }
- }
-
- /* Get the NVGRE info */
- if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
- rule->ixgbe_fdir.formatted.tunnel_type =
- IXGBE_FDIR_NVGRE_TUNNEL_TYPE;
-
- /**
- * Only care about flags0, flags1, protocol and TNI,
- * others should be masked.
- */
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
-
- /* Tunnel type is always meaningful. */
- rule->mask.tunnel_type_mask = 1;
-
- nvgre_mask = item->mask;
- if (nvgre_mask->flow_id) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- if (nvgre_mask->protocol &&
- nvgre_mask->protocol != 0xFFFF) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- if (nvgre_mask->c_k_s_rsvd0_ver &&
- nvgre_mask->c_k_s_rsvd0_ver !=
- rte_cpu_to_be_16(0xFFFF)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /* TNI must be totally masked or not. */
- if (nvgre_mask->tni[0] &&
- ((nvgre_mask->tni[0] != 0xFF) ||
- (nvgre_mask->tni[1] != 0xFF) ||
- (nvgre_mask->tni[2] != 0xFF))) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /* tni is a 24-bits bit field */
- rte_memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni,
- RTE_DIM(nvgre_mask->tni));
- rule->mask.tunnel_id_mask <<= 8;
-
- if (item->spec) {
- rule->b_spec = TRUE;
- nvgre_spec = item->spec;
- if (nvgre_spec->c_k_s_rsvd0_ver !=
- rte_cpu_to_be_16(0x2000) &&
- nvgre_mask->c_k_s_rsvd0_ver) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- if (nvgre_mask->protocol &&
- nvgre_spec->protocol !=
- rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /* tni is a 24-bits bit field */
- rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
- nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
- }
- }
-
- /* check if the next not void item is MAC */
- item = next_no_void_pattern(pattern, item);
- if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /**
- * Only support vlan and dst MAC address,
- * others should be masked.
- */
-
- if (!item->mask) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
- rule->b_mask = TRUE;
- eth_mask = item->mask;
-
- /* Ether type should be masked. */
- if (eth_mask->hdr.ether_type) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- /* src MAC address should be masked. */
- for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
- if (eth_mask->hdr.src_addr.addr_bytes[j]) {
- memset(rule, 0,
- sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
- rule->mask.mac_addr_byte_mask = 0;
- for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
- /* It's a per byte mask. */
- if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
- rule->mask.mac_addr_byte_mask |= 0x1 << j;
- } else if (eth_mask->hdr.dst_addr.addr_bytes[j]) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /* When no vlan, considered as full mask. */
- rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
-
- if (item->spec) {
- rule->b_spec = TRUE;
- eth_spec = item->spec;
-
- /* Get the dst MAC. */
- for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
- rule->ixgbe_fdir.formatted.inner_mac[j] =
- eth_spec->hdr.dst_addr.addr_bytes[j];
- }
- }
-
- /**
- * Check if the next not void item is vlan or ipv4.
- * IPv6 is not supported.
- */
- item = next_no_void_pattern(pattern, item);
- if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
- (item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- /*Not supported last point for range*/
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- item, "Not supported last point for range");
- return -rte_errno;
- }
-
- if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- if (!(item->spec && item->mask)) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
-
- vlan_spec = item->spec;
- vlan_mask = item->mask;
-
- rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
-
- rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
- rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
- /* More than one tags are not supported. */
-
- /* check if the next not void item is END */
- item = next_no_void_pattern(pattern, item);
-
- if (item->type != RTE_FLOW_ITEM_TYPE_END) {
- memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Not supported by fdir filter");
- return -rte_errno;
- }
- }
-
- /**
- * If the tags is 0, it means don't care about the VLAN.
- * Do nothing.
- */
-
- return 0;
-}
-
-/*
- * Check flow director actions
- */
-static int
-ixgbe_fdir_actions_check(const struct ci_flow_actions *parsed_actions,
- const struct ci_flow_actions_check_param *param __rte_unused,
- struct rte_flow_error *error)
-{
- const enum rte_flow_action_type fwd_actions[] = {
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_DROP,
- RTE_FLOW_ACTION_TYPE_END
- };
- const struct rte_flow_action *action, *drop_action = NULL;
-
- /* do the generic checks first */
- int ret = ixgbe_flow_actions_check(parsed_actions, param, error);
- if (ret)
- return ret;
-
- /* first action must be a forwarding action */
- action = parsed_actions->actions[0];
- if (!ci_flow_action_type_in_list(action->type, fwd_actions)) {
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
- action, "First action must be QUEUE or DROP");
- }
- /* remember if we have a drop action */
- if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
- drop_action = action;
- }
-
- /* second action, if specified, must not be a forwarding action */
- action = parsed_actions->actions[1];
- if (action != NULL && ci_flow_action_type_in_list(action->type, fwd_actions)) {
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
- action, "Conflicting actions");
- }
- /* if we didn't have a drop action before but now we do, remember that */
- if (drop_action == NULL && action != NULL && action->type == RTE_FLOW_ACTION_TYPE_DROP) {
- drop_action = action;
- }
- /* drop must be the only action */
- if (drop_action != NULL && action != NULL) {
- return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
- action, "Conflicting actions");
- }
- return 0;
-}
-
-static int
-ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct ixgbe_fdir_rule *rule,
- struct rte_flow_error *error)
-{
- int ret;
- struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* queue/mark/drop allowed here */
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_DROP,
- RTE_FLOW_ACTION_TYPE_MARK,
- RTE_FLOW_ACTION_TYPE_END
- },
- .driver_ctx = dev,
- .check = ixgbe_fdir_actions_check
- };
-
- if (hw->mac.type != ixgbe_mac_82599EB &&
- hw->mac.type != ixgbe_mac_X540 &&
- hw->mac.type != ixgbe_mac_X550 &&
- hw->mac.type != ixgbe_mac_X550EM_x &&
- hw->mac.type != ixgbe_mac_X550EM_a &&
- hw->mac.type != ixgbe_mac_E610)
- return -ENOTSUP;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
-
- fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE;
-
- ret = ixgbe_parse_fdir_filter_normal(dev, pattern, &parsed_actions, rule, error);
-
- if (!ret)
- goto step_next;
-
- ret = ixgbe_parse_fdir_filter_tunnel(pattern, &parsed_actions, rule, error);
-
- if (ret)
- return ret;
-
-step_next:
-
- if (hw->mac.type == ixgbe_mac_82599EB &&
- rule->fdirflags == IXGBE_FDIRCMD_DROP &&
- (rule->ixgbe_fdir.formatted.src_port != 0 ||
- rule->ixgbe_fdir.formatted.dst_port != 0))
- return -ENOTSUP;
-
- if (fdir_conf->mode == RTE_FDIR_MODE_NONE) {
- fdir_conf->mode = rule->mode;
- ret = ixgbe_fdir_configure(dev);
- if (ret) {
- fdir_conf->mode = RTE_FDIR_MODE_NONE;
- return ret;
- }
- } else if (fdir_conf->mode != rule->mode) {
- return -ENOTSUP;
- }
-
- if (rule->queue >= dev->data->nb_rx_queues)
- return -ENOTSUP;
-
- return ret;
-}
-
/* Flow actions check specific to RSS filter */
static int
ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions,
@@ -1665,7 +241,6 @@ ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
void
ixgbe_filterlist_init(void)
{
- TAILQ_INIT(&filter_fdir_list);
TAILQ_INIT(&filter_rss_list);
TAILQ_INIT(&ixgbe_flow_list);
}
@@ -1673,17 +248,9 @@ ixgbe_filterlist_init(void)
void
ixgbe_filterlist_flush(void)
{
- struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
- while ((fdir_rule_ptr = TAILQ_FIRST(&filter_fdir_list))) {
- TAILQ_REMOVE(&filter_fdir_list,
- fdir_rule_ptr,
- entries);
- rte_free(fdir_rule_ptr);
- }
-
while ((rss_filter_ptr = TAILQ_FIRST(&filter_rss_list))) {
TAILQ_REMOVE(&filter_rss_list,
rss_filter_ptr,
@@ -1715,15 +282,10 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
{
struct ixgbe_adapter *ad = dev->data->dev_private;
int ret;
- struct ixgbe_fdir_rule fdir_rule;
- struct ixgbe_hw_fdir_info *fdir_info =
- IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
struct ixgbe_rte_flow_rss_conf rss_conf;
struct rte_flow *flow = NULL;
- struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_rss_conf_ele *rss_filter_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
- uint8_t first_mask = FALSE;
/* try the new flow engine first */
flow = ci_flow_create(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
@@ -1750,81 +312,6 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&ixgbe_flow_list,
ixgbe_flow_mem_ptr, entries);
- memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
- ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
- actions, &fdir_rule, error);
- if (!ret) {
- /* A mask cannot be deleted. */
- if (fdir_rule.b_mask) {
- if (!fdir_info->mask_added) {
- /* It's the first time the mask is set. */
- *&fdir_info->mask = *&fdir_rule.mask;
-
- if (fdir_rule.mask.flex_bytes_mask) {
- ret = ixgbe_fdir_set_flexbytes_offset(dev,
- fdir_rule.flex_bytes_offset);
- if (ret)
- goto out;
- }
- ret = ixgbe_fdir_set_input_mask(dev);
- if (ret)
- goto out;
-
- fdir_info->mask_added = TRUE;
- first_mask = TRUE;
- } else {
- /**
- * Only support one global mask,
- * all the masks should be the same.
- */
- ret = memcmp(&fdir_info->mask,
- &fdir_rule.mask,
- sizeof(struct ixgbe_hw_fdir_mask));
- if (ret)
- goto out;
-
- if (fdir_rule.mask.flex_bytes_mask &&
- fdir_info->flex_bytes_offset !=
- fdir_rule.flex_bytes_offset)
- goto out;
- }
- }
-
- if (fdir_rule.b_spec) {
- ret = ixgbe_fdir_filter_program(dev, &fdir_rule,
- FALSE, FALSE);
- if (!ret) {
- fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter",
- sizeof(struct ixgbe_fdir_rule_ele), 0);
- if (!fdir_rule_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- rte_memcpy(&fdir_rule_ptr->filter_info,
- &fdir_rule,
- sizeof(struct ixgbe_fdir_rule));
- TAILQ_INSERT_TAIL(&filter_fdir_list,
- fdir_rule_ptr, entries);
- flow->rule = fdir_rule_ptr;
- flow->filter_type = RTE_ETH_FILTER_FDIR;
-
- return flow;
- }
-
- if (ret) {
- /**
- * clean the mask_added flag if fail to
- * program
- **/
- if (first_mask)
- fdir_info->mask_added = FALSE;
- goto out;
- }
- }
-
- goto out;
- }
-
memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
ret = ixgbe_parse_rss_filter(dev, attr,
actions, &rss_conf, error);
@@ -1871,7 +358,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ixgbe_adapter *ad = dev->data->dev_private;
- struct ixgbe_fdir_rule fdir_rule;
struct ixgbe_rte_flow_rss_conf rss_conf;
int ret;
@@ -1883,12 +369,6 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
/* fall back to legacy engines */
- memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
- ret = ixgbe_parse_fdir_filter(dev, attr, pattern,
- actions, &fdir_rule, error);
- if (!ret)
- return 0;
-
memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
ret = ixgbe_parse_rss_filter(dev, attr,
actions, &rss_conf, error);
@@ -1906,11 +386,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
int ret;
struct rte_flow *pmd_flow = flow;
enum rte_filter_type filter_type = pmd_flow->filter_type;
- struct ixgbe_fdir_rule fdir_rule;
- struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
- struct ixgbe_hw_fdir_info *fdir_info =
- IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
struct ixgbe_rss_conf_ele *rss_filter_ptr;
/* try the new flow engine first */
@@ -1923,20 +399,6 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
/* fall back to legacy engines */
switch (filter_type) {
- case RTE_ETH_FILTER_FDIR:
- fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
- rte_memcpy(&fdir_rule,
- &fdir_rule_ptr->filter_info,
- sizeof(struct ixgbe_fdir_rule));
- ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
- if (!ret) {
- TAILQ_REMOVE(&filter_fdir_list,
- fdir_rule_ptr, entries);
- rte_free(fdir_rule_ptr);
- if (TAILQ_EMPTY(&filter_fdir_list))
- fdir_info->mask_added = false;
- }
- break;
case RTE_ETH_FILTER_HASH:
rss_filter_ptr = (struct ixgbe_rss_conf_ele *)
pmd_flow->rule;
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index daff23e227..91ee5106e3 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -14,6 +14,8 @@ enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_L2_TUNNEL,
IXGBE_FLOW_ENGINE_TYPE_NTUPLE,
IXGBE_FLOW_ENGINE_TYPE_SECURITY,
+ IXGBE_FLOW_ENGINE_TYPE_FDIR,
+ IXGBE_FLOW_ENGINE_TYPE_FDIR_TUNNEL,
};
int
@@ -28,5 +30,7 @@ extern const struct ci_flow_engine ixgbe_syn_flow_engine;
extern const struct ci_flow_engine ixgbe_l2_tunnel_flow_engine;
extern const struct ci_flow_engine ixgbe_ntuple_flow_engine;
extern const struct ci_flow_engine ixgbe_security_flow_engine;
+extern const struct ci_flow_engine ixgbe_fdir_flow_engine;
+extern const struct ci_flow_engine ixgbe_fdir_tunnel_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_fdir.c b/drivers/net/intel/ixgbe/ixgbe_flow_fdir.c
new file mode 100644
index 0000000000..c0c43ddbff
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_fdir.c
@@ -0,0 +1,1510 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_fdir_flow {
+ struct rte_flow flow;
+ struct ixgbe_fdir_rule rule;
+};
+
+struct ixgbe_fdir_ctx {
+ struct ci_flow_engine_ctx base;
+ struct ixgbe_fdir_rule rule;
+ bool supports_sctp_ports;
+ const struct rte_flow_action *fwd_action;
+ const struct rte_flow_action *aux_action;
+};
+
+#define IXGBE_FDIR_VLAN_TCI_MASK rte_cpu_to_be_16(0xEFFF)
+
+/**
+ * FDIR normal graph implementation
+ * Pattern: START -> [ETH] -> [IPv4|IPv6] -> [TCP|UDP|SCTP] -> [RAW] -> END
+ * Pattern: START -> ETH -> VLAN -> END
+ */
+
+enum ixgbe_fdir_normal_node_id {
+ IXGBE_FDIR_NORMAL_NODE_START = RTE_FLOW_NODE_FIRST,
+ /* special node to report fuzzy matches */
+ IXGBE_FDIR_NORMAL_NODE_FUZZY,
+ IXGBE_FDIR_NORMAL_NODE_ETH,
+ IXGBE_FDIR_NORMAL_NODE_VLAN,
+ IXGBE_FDIR_NORMAL_NODE_IPV4,
+ IXGBE_FDIR_NORMAL_NODE_IPV6,
+ IXGBE_FDIR_NORMAL_NODE_TCP,
+ IXGBE_FDIR_NORMAL_NODE_UDP,
+ IXGBE_FDIR_NORMAL_NODE_SCTP,
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ IXGBE_FDIR_NORMAL_NODE_MAX,
+};
+
+static inline uint8_t
+signature_match(const struct rte_flow_item *item)
+{
+ const struct rte_flow_item_fuzzy *spec, *last, *mask;
+ uint32_t sh, lh, mh;
+
+ spec = item->spec;
+ last = item->last;
+ mask = item->mask;
+
+ if (spec == NULL && mask == NULL)
+ return 0;
+
+ sh = spec->thresh;
+
+ if (last == NULL)
+ lh = sh;
+ else
+ lh = last->thresh;
+
+ mh = mask->thresh;
+ sh = sh & mh;
+ lh = lh & mh;
+
+ /*
+ * A fuzzy item selects signature mode only when the masked threshold range
+ * is non-empty. Otherwise this stays a perfect-match rule.
+ */
+ if (!sh || sh > lh)
+ return 0;
+
+ return 1;
+}
+
+static int
+ixgbe_process_fdir_normal_fuzzy(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+
+ fdir_ctx->rule.mode = RTE_FDIR_MODE_PERFECT;
+
+ /* spec and mask are optional */
+ if (item->spec == NULL && item->mask == NULL)
+ return 0;
+
+ if (signature_match(item)) {
+ fdir_ctx->rule.mode = RTE_FDIR_MODE_SIGNATURE;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_eth(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+
+ if (item->spec == NULL && item->mask == NULL)
+ return 0;
+
+ /* we cannot have ETH item in signature mode */
+ if (fdir_ctx->rule.mode == RTE_FDIR_MODE_SIGNATURE) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "ETH item not supported in signature mode");
+ }
+ /* ethertype isn't supported by FDIR */
+ if (!CI_FIELD_IS_ZERO(&eth_mask->hdr.ether_type)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Ethertype filtering not supported");
+ }
+ /* source address mask must be all zeroes */
+ if (!CI_FIELD_IS_ZERO(&eth_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Source MAC filtering not supported");
+ }
+ /* destination address mask must be all ones */
+ if (!CI_FIELD_IS_MASKED(&eth_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Destination MAC filtering must be exact match");
+ }
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_eth(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+
+ if (eth_spec == NULL && eth_mask == NULL)
+ return 0;
+
+ /* copy dst MAC */
+ rule->b_spec = TRUE;
+ memcpy(rule->ixgbe_fdir.formatted.inner_mac, eth_spec->hdr.dst_addr.addr_bytes,
+ RTE_ETHER_ADDR_LEN);
+
+ /* switch rule to MAC/VLAN perfect match mode */
+ rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+ /* when no VLAN specified, set full mask */
+ rule->b_mask = TRUE;
+ rule->mask.vlan_tci_mask = IXGBE_FDIR_VLAN_TCI_MASK;
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_vlan(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
+
+ rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
+ rule->mask.vlan_tci_mask &= IXGBE_FDIR_VLAN_TCI_MASK;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_ipv4(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4 not supported with ETH/VLAN items");
+ }
+
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.type_of_service ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.packet_id ||
+ ipv4_mask->hdr.fragment_offset ||
+ ipv4_mask->hdr.time_to_live ||
+ ipv4_mask->hdr.next_proto_id ||
+ ipv4_mask->hdr.hdr_checksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst addresses supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_ipv4(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_ipv4 *ipv4_spec = item->spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_IPV4;
+
+ /* spec may not be present */
+ if (ipv4_spec) {
+ rule->b_spec = TRUE;
+ rule->ixgbe_fdir.formatted.dst_ip[0] = ipv4_spec->hdr.dst_addr;
+ rule->ixgbe_fdir.formatted.src_ip[0] = ipv4_spec->hdr.src_addr;
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+ rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_ipv6(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_ipv6 *ipv6_mask = item->mask;
+ const struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv6 not supported with ETH/VLAN items");
+ }
+
+ if (rule->mode != RTE_FDIR_MODE_SIGNATURE) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv6 only supported in signature mode");
+ }
+
+ if (ipv6_mask->hdr.vtc_flow ||
+ ipv6_mask->hdr.payload_len ||
+ ipv6_mask->hdr.proto ||
+ ipv6_mask->hdr.hop_limits) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst addresses supported");
+ }
+
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial src address masks not supported");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial dst address masks not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_ipv6(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_ipv6 *ipv6_spec = item->spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+ uint8_t j;
+
+ rule->ixgbe_fdir.formatted.flow_type = IXGBE_ATR_FLOW_TYPE_IPV6;
+
+ /* spec may not be present */
+ if (ipv6_spec) {
+ rule->b_spec = TRUE;
+ memcpy(rule->ixgbe_fdir.formatted.src_ip, &ipv6_spec->hdr.src_addr,
+ sizeof(struct rte_ipv6_addr));
+ memcpy(rule->ixgbe_fdir.formatted.dst_ip, &ipv6_spec->hdr.dst_addr,
+ sizeof(struct rte_ipv6_addr));
+ }
+
+ rule->b_mask = TRUE;
+ for (j = 0; j < sizeof(struct rte_ipv6_addr); j++) {
+ if (ipv6_mask->hdr.src_addr.a[j] == 0)
+ rule->mask.src_ipv6_mask &= ~(1 << j);
+ if (ipv6_mask->hdr.dst_addr.a[j] == 0)
+ rule->mask.dst_ipv6_mask &= ~(1 << j);
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_tcp(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "TCP not supported with ETH/VLAN items");
+ }
+
+ if (tcp_mask->hdr.sent_seq ||
+ tcp_mask->hdr.recv_ack ||
+ tcp_mask->hdr.data_off ||
+ tcp_mask->hdr.tcp_flags ||
+ tcp_mask->hdr.rx_win ||
+ tcp_mask->hdr.cksum ||
+ tcp_mask->hdr.tcp_urp) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst ports supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_tcp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_tcp *tcp_spec = item->spec;
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.flow_type |= IXGBE_ATR_L4TYPE_TCP;
+
+ /* spec is optional */
+ if (tcp_spec) {
+ rule->b_spec = TRUE;
+ rule->ixgbe_fdir.formatted.src_port = tcp_spec->hdr.src_port;
+ rule->ixgbe_fdir.formatted.dst_port = tcp_spec->hdr.dst_port;
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+ rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_udp(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "UDP not supported with ETH/VLAN items");
+ }
+
+ if (udp_mask->hdr.dgram_len ||
+ udp_mask->hdr.dgram_cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Only src/dst ports supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_udp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_udp *udp_spec = item->spec;
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.flow_type |= IXGBE_ATR_L4TYPE_UDP;
+
+ /* spec is optional */
+ if (udp_spec) {
+ rule->b_spec = TRUE;
+ rule->ixgbe_fdir.formatted.src_port = udp_spec->hdr.src_port;
+ rule->ixgbe_fdir.formatted.dst_port = udp_spec->hdr.dst_port;
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.src_port_mask = udp_mask->hdr.src_port;
+ rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_sctp(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+ const struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "SCTP not supported with ETH/VLAN items");
+ }
+
+ /* mask is optional */
+ if (sctp_mask == NULL)
+ return 0;
+
+ /* mask can only be specified for some hardware */
+ if (!fdir_ctx->supports_sctp_ports) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "SCTP mask not supported");
+ }
+
+ /* Tag and checksum not supported */
+ if (sctp_mask->hdr.tag ||
+ sctp_mask->hdr.cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "SCTP tag/cksum not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_sctp(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_sctp *sctp_spec = item->spec;
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.flow_type |= IXGBE_ATR_L4TYPE_SCTP;
+
+ /* spec is optional */
+ if (sctp_spec) {
+ rule->b_spec = TRUE;
+ rule->ixgbe_fdir.formatted.src_port = sctp_spec->hdr.src_port;
+ rule->ixgbe_fdir.formatted.dst_port = sctp_spec->hdr.dst_port;
+ }
+
+ /* mask is optional */
+ if (sctp_mask) {
+ rule->b_mask = TRUE;
+ rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+ rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_normal_raw(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_raw *raw_spec;
+ const struct rte_flow_item_raw *raw_mask;
+
+ raw_mask = item->mask;
+
+ if (raw_mask->relative != 0x1 ||
+ raw_mask->search != 0x1 ||
+ raw_mask->reserved != 0x0 ||
+ (uint32_t)raw_mask->offset != 0xffffffff ||
+ raw_mask->limit != 0xffff ||
+ raw_mask->length != 0xffff) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid RAW mask");
+ }
+
+ raw_spec = item->spec;
+
+ if (raw_spec->relative != 0 ||
+ raw_spec->search != 0 ||
+ raw_spec->reserved != 0 ||
+ raw_spec->offset > IXGBE_MAX_FLX_SOURCE_OFF ||
+ raw_spec->offset % 2 ||
+ raw_spec->limit != 0 ||
+ raw_spec->length != 2 ||
+ (raw_spec->pattern[0] == 0xff &&
+ raw_spec->pattern[1] == 0xff)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid RAW spec");
+ }
+
+ if (raw_mask->pattern[0] != 0xff ||
+ raw_mask->pattern[1] != 0xff) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW pattern must be fully masked");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_normal_raw(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_raw *raw_spec = item->spec;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->b_spec = TRUE;
+ rule->ixgbe_fdir.formatted.flex_bytes =
+ (((uint16_t)raw_spec->pattern[1]) << 8) | raw_spec->pattern[0];
+ rule->flex_bytes_offset = raw_spec->offset;
+
+ rule->b_mask = TRUE;
+ rule->mask.flex_bytes_mask = 0xffff;
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_fdir_normal_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_FDIR_NORMAL_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_FDIR_NORMAL_NODE_FUZZY] = {
+ .name = "FUZZY",
+ .type = RTE_FLOW_ITEM_TYPE_FUZZY,
+ .process = ixgbe_process_fdir_normal_fuzzy,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK |
+ RTE_FLOW_NODE_EXPECT_RANGE,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .validate = ixgbe_validate_fdir_normal_eth,
+ .process = ixgbe_process_fdir_normal_eth,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_VLAN] = {
+ .name = "VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .process = ixgbe_process_fdir_normal_vlan,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_IPV4] = {
+ .name = "IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .validate = ixgbe_validate_fdir_normal_ipv4,
+ .process = ixgbe_process_fdir_normal_ipv4,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_IPV6] = {
+ .name = "IPV6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .validate = ixgbe_validate_fdir_normal_ipv6,
+ .process = ixgbe_process_fdir_normal_ipv6,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .validate = ixgbe_validate_fdir_normal_tcp,
+ .process = ixgbe_process_fdir_normal_tcp,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .validate = ixgbe_validate_fdir_normal_udp,
+ .process = ixgbe_process_fdir_normal_udp,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_SCTP] = {
+ .name = "SCTP",
+ .type = RTE_FLOW_ITEM_TYPE_SCTP,
+ .validate = ixgbe_validate_fdir_normal_sctp,
+ .process = ixgbe_process_fdir_normal_sctp,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_RAW] = {
+ .name = "RAW",
+ .type = RTE_FLOW_ITEM_TYPE_RAW,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = ixgbe_validate_fdir_normal_raw,
+ .process = ixgbe_process_fdir_normal_raw,
+ },
+ [IXGBE_FDIR_NORMAL_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_FDIR_NORMAL_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_ETH,
+ IXGBE_FDIR_NORMAL_NODE_IPV4,
+ IXGBE_FDIR_NORMAL_NODE_IPV6,
+ IXGBE_FDIR_NORMAL_NODE_TCP,
+ IXGBE_FDIR_NORMAL_NODE_UDP,
+ IXGBE_FDIR_NORMAL_NODE_SCTP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_VLAN,
+ IXGBE_FDIR_NORMAL_NODE_IPV4,
+ IXGBE_FDIR_NORMAL_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_VLAN] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_TCP,
+ IXGBE_FDIR_NORMAL_NODE_UDP,
+ IXGBE_FDIR_NORMAL_NODE_SCTP,
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_TCP,
+ IXGBE_FDIR_NORMAL_NODE_UDP,
+ IXGBE_FDIR_NORMAL_NODE_SCTP,
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_TCP] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_UDP] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_SCTP] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_RAW,
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_NORMAL_NODE_RAW] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_NORMAL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+/**
+ * FDIR tunnel graph implementation (VxLAN and NVGRE)
+ * VxLAN: START -> [OUTER_ETH] -> (OUTER_IPv4|OUTER_IPv6) -> UDP -> VXLAN -> INNER_ETH -> [VLAN] -> END
+ * NVGRE: START -> [OUTER_ETH] -> (OUTER_IPv4|OUTER_IPv6) -> NVGRE -> INNER_ETH -> [VLAN] -> END
+ */
+
+enum ixgbe_fdir_tunnel_node_id {
+ IXGBE_FDIR_TUNNEL_NODE_START = RTE_FLOW_NODE_FIRST,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_ETH,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV4,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV6,
+ IXGBE_FDIR_TUNNEL_NODE_UDP,
+ IXGBE_FDIR_TUNNEL_NODE_VXLAN,
+ IXGBE_FDIR_TUNNEL_NODE_NVGRE,
+ IXGBE_FDIR_TUNNEL_NODE_INNER_ETH,
+ IXGBE_FDIR_TUNNEL_NODE_INNER_IPV4,
+ IXGBE_FDIR_TUNNEL_NODE_VLAN,
+ IXGBE_FDIR_TUNNEL_NODE_END,
+ IXGBE_FDIR_TUNNEL_NODE_MAX,
+};
+
+static int
+ixgbe_validate_fdir_tunnel_vxlan(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vxlan *vxlan_mask = item->mask;
+
+ if (vxlan_mask->hdr.flags) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "VxLAN flags must not be masked");
+ }
+
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&vxlan_mask->hdr.vni)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial VNI mask not supported");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_tunnel_vxlan(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_vxlan *vxlan_spec = item->spec;
+ const struct rte_flow_item_vxlan *vxlan_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.tunnel_type = IXGBE_FDIR_VXLAN_TUNNEL_TYPE;
+
+ /* spec is optional */
+ if (vxlan_spec != NULL) {
+ rule->b_spec = TRUE;
+ memcpy(((uint8_t *)&rule->ixgbe_fdir.formatted.tni_vni), vxlan_spec->hdr.vni,
+ RTE_DIM(vxlan_spec->hdr.vni));
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.tunnel_type_mask = 1;
+ memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->hdr.vni, RTE_DIM(vxlan_mask->hdr.vni));
+
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_tunnel_nvgre(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_nvgre *nvgre_mask;
+
+ nvgre_mask = item->mask;
+
+ if (nvgre_mask->flow_id) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "NVGRE flow ID must not be masked");
+ }
+
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&nvgre_mask->protocol)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "NVGRE protocol must be fully masked or unmasked");
+ }
+
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&nvgre_mask->c_k_s_rsvd0_ver)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "NVGRE flags must be fully masked or unmasked");
+ }
+
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&nvgre_mask->tni)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial TNI mask not supported");
+ }
+
+ /* if spec is present, validate flags and protocol values */
+ if (item->spec) {
+ const struct rte_flow_item_nvgre *nvgre_spec = item->spec;
+
+ if (nvgre_mask->c_k_s_rsvd0_ver &&
+ nvgre_spec->c_k_s_rsvd0_ver != rte_cpu_to_be_16(0x2000)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "NVGRE flags must be 0x2000");
+ }
+ if (nvgre_mask->protocol &&
+ nvgre_spec->protocol != rte_cpu_to_be_16(0x6558)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "NVGRE protocol must be 0x6558");
+ }
+ }
+
+ return 0;
+}
+
+#define NVGRE_FLAGS 0x2000
+#define NVGRE_PROTOCOL 0x6558
+static int
+ixgbe_process_fdir_tunnel_nvgre(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_nvgre *nvgre_spec = item->spec;
+ const struct rte_flow_item_nvgre *nvgre_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.tunnel_type = IXGBE_FDIR_NVGRE_TUNNEL_TYPE;
+
+ /* spec is optional */
+ if (nvgre_spec != NULL) {
+ rule->b_spec = TRUE;
+ memcpy(&fdir_ctx->rule.ixgbe_fdir.formatted.tni_vni,
+ nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.tunnel_type_mask = 1;
+ memcpy(&rule->mask.tunnel_id_mask, nvgre_mask->tni, RTE_DIM(nvgre_mask->tni));
+ rule->mask.tunnel_id_mask <<= 8;
+ return 0;
+}
+
+static int
+ixgbe_validate_fdir_tunnel_inner_eth(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+
+ if (eth_mask->hdr.ether_type) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Ether type mask not supported");
+ }
+
+ /* src addr must not be masked */
+ if (!CI_FIELD_IS_ZERO(ð_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Masking not supported for src MAC address");
+ }
+
+ /* dst addr must be either fully masked or fully unmasked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(ð_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Partial masks not supported for dst MAC address");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_tunnel_inner_eth(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+ uint8_t j;
+
+ /* spec is optional */
+ if (eth_spec != NULL) {
+ rule->b_spec = TRUE;
+ memcpy(&rule->ixgbe_fdir.formatted.inner_mac, eth_spec->hdr.src_addr.addr_bytes,
+ RTE_ETHER_ADDR_LEN);
+ }
+
+ rule->b_mask = TRUE;
+ rule->mask.mac_addr_byte_mask = 0;
+ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
+ if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) {
+ rule->mask.mac_addr_byte_mask |= 0x1 << j;
+ }
+ }
+
+ /* When no VLAN item is present, the VLAN TCI is considered fully masked. */
+ rule->mask.vlan_tci_mask = IXGBE_FDIR_VLAN_TCI_MASK;
+
+ return 0;
+}
+
+static int
+ixgbe_process_fdir_tunnel_vlan(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+
+ rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->hdr.vlan_tci;
+
+ rule->mask.vlan_tci_mask = vlan_mask->hdr.vlan_tci;
+ rule->mask.vlan_tci_mask &= IXGBE_FDIR_VLAN_TCI_MASK;
+
+ return 0;
+}
+
+const struct rte_flow_graph ixgbe_fdir_tunnel_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [IXGBE_FDIR_TUNNEL_NODE_START] = {
+ .name = "START",
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_ETH] = {
+ .name = "OUTER_ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV4] = {
+ .name = "OUTER_IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV6] = {
+ .name = "OUTER_IPV6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_VXLAN] = {
+ .name = "VXLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VXLAN,
+ .validate = ixgbe_validate_fdir_tunnel_vxlan,
+ .process = ixgbe_process_fdir_tunnel_vxlan,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_NVGRE] = {
+ .name = "NVGRE",
+ .type = RTE_FLOW_ITEM_TYPE_NVGRE,
+ .validate = ixgbe_validate_fdir_tunnel_nvgre,
+ .process = ixgbe_process_fdir_tunnel_nvgre,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_INNER_ETH] = {
+ .name = "INNER_ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .validate = ixgbe_validate_fdir_tunnel_inner_eth,
+ .process = ixgbe_process_fdir_tunnel_inner_eth,
+ .constraints = RTE_FLOW_NODE_EXPECT_MASK |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_INNER_IPV4] = {
+ .name = "INNER_IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_VLAN] = {
+ .name = "VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .process = ixgbe_process_fdir_tunnel_vlan,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [IXGBE_FDIR_TUNNEL_NODE_START] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_ETH,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV4,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV6,
+ IXGBE_FDIR_TUNNEL_NODE_UDP,
+ IXGBE_FDIR_TUNNEL_NODE_VXLAN,
+ IXGBE_FDIR_TUNNEL_NODE_NVGRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV4,
+ IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV4] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_UDP,
+ IXGBE_FDIR_TUNNEL_NODE_NVGRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_OUTER_IPV6] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_UDP,
+ IXGBE_FDIR_TUNNEL_NODE_NVGRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_UDP] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_VXLAN,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_VXLAN] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_INNER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_NVGRE] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_INNER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_INNER_ETH] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_VLAN,
+ IXGBE_FDIR_TUNNEL_NODE_INNER_IPV4,
+ IXGBE_FDIR_TUNNEL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [IXGBE_FDIR_TUNNEL_NODE_VLAN] = {
+ .next = (const size_t[]) {
+ IXGBE_FDIR_TUNNEL_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+ixgbe_fdir_actions_check(const struct ci_flow_actions *parsed_actions,
+ const struct ci_flow_actions_check_param *param __rte_unused,
+ struct rte_flow_error *error)
+{
+ const enum rte_flow_action_type fwd_actions[] = {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_DROP,
+ RTE_FLOW_ACTION_TYPE_END
+ };
+ const struct rte_flow_action *action, *drop_action = NULL;
+
+ /* do the generic checks first */
+ int ret = ixgbe_flow_actions_check(parsed_actions, param, error);
+ if (ret)
+ return ret;
+
+ /* first action must be a forwarding action */
+ action = parsed_actions->actions[0];
+ if (!ci_flow_action_type_in_list(action->type, fwd_actions)) {
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ action, "First action must be QUEUE or DROP");
+ }
+ /* remember if we have a drop action */
+ if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+ drop_action = action;
+ }
+
+ /* second action, if specified, must not be a forwarding action */
+ action = parsed_actions->actions[1];
+ if (action != NULL && ci_flow_action_type_in_list(action->type, fwd_actions)) {
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ action, "Conflicting actions");
+ }
+ /* if we didn't have a drop action before but now we do, remember that */
+ if (drop_action == NULL && action != NULL && action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+ drop_action = action;
+ }
+ /* drop must be the only action */
+ if (drop_action != NULL && action != NULL) {
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ action, "Conflicting actions");
+ }
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_ctx_validate(struct ci_flow_engine_ctx *ctx, struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(ctx->dev->data->dev_private);
+ struct ixgbe_fdir_ctx *fdir_ctx = (struct ixgbe_fdir_ctx *)ctx;
+ struct rte_eth_fdir_conf *global_fdir_conf = IXGBE_DEV_FDIR_CONF(ctx->dev);
+
+ /* DROP action cannot be used with signature matches */
+ if ((fdir_ctx->rule.mode == RTE_FDIR_MODE_SIGNATURE) &&
+ (fdir_ctx->fwd_action->type == RTE_FLOW_ACTION_TYPE_DROP)) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "DROP action not supported with signature mode");
+ }
+
+ /* 82599 does not support drop action with port match */
+ if (hw->mac.type == ixgbe_mac_82599EB &&
+ fdir_ctx->fwd_action->type == RTE_FLOW_ACTION_TYPE_DROP &&
+ (fdir_ctx->rule.ixgbe_fdir.formatted.src_port != 0 ||
+ fdir_ctx->rule.ixgbe_fdir.formatted.dst_port != 0)) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, fdir_ctx->fwd_action,
+ "82599 does not support drop action with port match.");
+ }
+
+ /* check for conflicting filter modes */
+ if (global_fdir_conf->mode != fdir_ctx->rule.mode) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Conflicting filter modes");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_ctx_parse_common(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = (struct ixgbe_fdir_ctx *)ctx;
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* queue/mark/drop allowed here */
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_DROP,
+ RTE_FLOW_ACTION_TYPE_MARK,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_fdir_actions_check
+ };
+ struct ixgbe_fdir_rule *rule = &fdir_ctx->rule;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ fdir_ctx->fwd_action = parsed_actions.actions[0];
+ /* can be NULL */
+ fdir_ctx->aux_action = parsed_actions.actions[1];
+
+ /* set up forward/drop action */
+ if (fdir_ctx->fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *q_act = fdir_ctx->fwd_action->conf;
+ rule->queue = q_act->index;
+ } else {
+ rule->fdirflags = IXGBE_FDIRCMD_DROP;
+ }
+
+ /* set up mark action */
+ if (fdir_ctx->aux_action != NULL && fdir_ctx->aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) {
+ const struct rte_flow_action_mark *m_act = fdir_ctx->aux_action->conf;
+ rule->soft_id = m_act->id;
+ }
+
+ return ret;
+}
+
+static int
+ixgbe_flow_fdir_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(ctx->dev->data->dev_private);
+ struct ixgbe_fdir_ctx *fdir_ctx = (struct ixgbe_fdir_ctx *)ctx;
+ int ret;
+
+ /* call into common part first */
+ ret = ixgbe_flow_fdir_ctx_parse_common(actions, attr, ctx, error);
+ if (ret)
+ return ret;
+
+ /* some hardware does not support SCTP matching */
+ if (hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610)
+ fdir_ctx->supports_sctp_ports = true;
+
+ /*
+ * Some fields may not be provided. Initialize the spec to 0 and the
+ * mask to its default value, so that fields which were not provided
+ * need no further handling later.
+ */
+ memset(&fdir_ctx->rule.mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+ fdir_ctx->rule.mask.vlan_tci_mask = 0;
+ fdir_ctx->rule.mask.flex_bytes_mask = 0;
+ fdir_ctx->rule.mask.dst_port_mask = 0;
+ fdir_ctx->rule.mask.src_port_mask = 0;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_tunnel_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ixgbe_fdir_ctx *fdir_ctx = (struct ixgbe_fdir_ctx *)ctx;
+ int ret;
+
+ /* call into common part first */
+ ret = ixgbe_flow_fdir_ctx_parse_common(actions, attr, ctx, error);
+ if (ret)
+ return ret;
+
+ /*
+ * Some fields may not be provided. Initialize the spec to 0 and the
+ * mask to its default value, so that fields which were not provided
+ * need no further handling later.
+ */
+ memset(&fdir_ctx->rule.mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+ fdir_ctx->rule.mask.vlan_tci_mask = 0;
+
+ fdir_ctx->rule.mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_fdir_ctx *fdir_ctx = (const struct ixgbe_fdir_ctx *)ctx;
+ struct ixgbe_fdir_flow *fdir_flow = (struct ixgbe_fdir_flow *)flow;
+
+ fdir_flow->rule = fdir_ctx->rule;
+
+ return 0;
+}
+
+/* 1 if needs mask install, 0 if doesn't, -1 if incompatible */
+static int
+ixgbe_flow_fdir_needs_mask_install(struct ixgbe_fdir_flow *fdir_flow)
+{
+ struct ixgbe_adapter *adapter = fdir_flow->flow.flow.dev->data->dev_private;
+ struct ixgbe_hw_fdir_info *global_fdir_info = IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+ struct ixgbe_fdir_rule *rule = &fdir_flow->rule;
+ int ret;
+
+ /* if rule doesn't have a mask, don't do anything */
+ if (rule->b_mask == 0)
+ return 0;
+
+ /* rule has a mask, check if global config doesn't */
+ if (!global_fdir_info->mask_added)
+ return 1;
+
+ /* global config has a mask, check if it matches */
+ ret = memcmp(&global_fdir_info->mask, &rule->mask, sizeof(rule->mask));
+ if (ret)
+ return -1;
+
+ /* does rule specify flex bytes mask? */
+ if (rule->mask.flex_bytes_mask == 0)
+ /* compatible */
+ return 0;
+
+ /* if flex bytes mask is set, check if offset matches */
+ if (global_fdir_info->flex_bytes_offset != rule->flex_bytes_offset)
+ return -1;
+
+ /* compatible */
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_install_mask(struct ixgbe_fdir_flow *fdir_flow, struct rte_flow_error *error)
+{
+ struct rte_eth_dev *dev = fdir_flow->flow.flow.dev;
+ struct ixgbe_adapter *adapter = dev->data->dev_private;
+ struct ixgbe_hw_fdir_info *global_fdir_info = IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+ struct ixgbe_fdir_rule *rule = &fdir_flow->rule;
+ int ret;
+
+ /* store mask */
+ global_fdir_info->mask = rule->mask;
+
+ /* do we need flex byte mask? */
+ if (rule->mask.flex_bytes_mask != 0) {
+ ret = ixgbe_fdir_set_flexbytes_offset(dev, rule->flex_bytes_offset);
+ if (ret != 0) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to set flex bytes offset");
+ }
+ }
+
+ /* set mask */
+ ret = ixgbe_fdir_set_input_mask(dev);
+ if (ret != 0) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to set input mask");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct rte_eth_dev *dev = flow->dev;
+ struct ixgbe_adapter *adapter = dev->data->dev_private;
+ struct rte_eth_fdir_conf *global_fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+ struct ixgbe_hw_fdir_info *global_fdir_info = IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+ struct ixgbe_fdir_flow *fdir_flow = (struct ixgbe_fdir_flow *)flow;
+ struct ixgbe_fdir_rule *rule = &fdir_flow->rule;
+ bool mask_installed = false;
+ int ret;
+
+ /* if flow director isn't configured, configure it */
+ if (global_fdir_conf->mode == RTE_FDIR_MODE_NONE) {
+ global_fdir_conf->mode = rule->mode;
+ ret = ixgbe_fdir_configure(dev);
+ if (ret) {
+ global_fdir_conf->mode = RTE_FDIR_MODE_NONE;
+
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to configure flow director");
+ }
+ }
+
+ /* check if we need to install the mask first */
+ ret = ixgbe_flow_fdir_needs_mask_install(fdir_flow);
+ if (ret < 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Flow mask is incompatible with existing rules");
+ } else if (ret > 0) {
+ /* no mask yet, install it */
+ ret = ixgbe_flow_fdir_install_mask(fdir_flow, error);
+ if (ret != 0)
+ return ret;
+ mask_installed = true;
+ }
+
+ /* now install the rule */
+ if (rule->b_spec) {
+ ret = ixgbe_fdir_filter_program(dev, rule, FALSE, FALSE);
+ if (ret) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to program flow director filter");
+ }
+ }
+
+ /* if we installed a mask, mark it as installed */
+ if (mask_installed)
+ global_fdir_info->mask_added = TRUE;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_fdir_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct rte_eth_dev *dev = flow->dev;
+ struct ixgbe_adapter *adapter = dev->data->dev_private;
+ struct rte_eth_fdir_conf *global_fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+ struct ixgbe_hw_fdir_info *global_fdir_info = IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter);
+ struct ixgbe_fdir_flow *fdir_flow = (struct ixgbe_fdir_flow *)flow;
+ struct ixgbe_fdir_rule *rule = &fdir_flow->rule;
+ int ret;
+
+ /* uninstall the rule */
+ ret = ixgbe_fdir_filter_program(dev, rule, TRUE, FALSE);
+ if (ret != 0) {
+ return rte_flow_error_set(error, ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Failed to remove flow director filter");
+ }
+
+ /* when last filter is removed, also remove the mask */
+ if (!TAILQ_EMPTY(&global_fdir_info->fdir_list))
+ return 0;
+
+ global_fdir_info->mask_added = FALSE;
+ global_fdir_info->mask = (struct ixgbe_hw_fdir_mask){0};
+ global_fdir_conf->mode = RTE_FDIR_MODE_NONE;
+
+ return 0;
+}
+
+static bool
+ixgbe_flow_fdir_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_82599EB ||
+ hw->mac.type == ixgbe_mac_X540 ||
+ hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+static bool
+ixgbe_flow_fdir_tunnel_is_available(const struct ci_flow_engine *engine __rte_unused,
+ const struct rte_eth_dev *dev)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ return hw->mac.type == ixgbe_mac_X550 ||
+ hw->mac.type == ixgbe_mac_X550EM_x ||
+ hw->mac.type == ixgbe_mac_X550EM_a ||
+ hw->mac.type == ixgbe_mac_E610;
+}
+
+const struct ci_flow_engine_ops ixgbe_fdir_ops = {
+ .is_available = ixgbe_flow_fdir_is_available,
+ .ctx_parse = ixgbe_flow_fdir_ctx_parse,
+ .ctx_validate = ixgbe_flow_fdir_ctx_validate,
+ .ctx_to_flow = ixgbe_flow_fdir_ctx_to_flow,
+ .flow_install = ixgbe_flow_fdir_flow_install,
+ .flow_uninstall = ixgbe_flow_fdir_flow_uninstall,
+};
+
+const struct ci_flow_engine_ops ixgbe_fdir_tunnel_ops = {
+ .is_available = ixgbe_flow_fdir_tunnel_is_available,
+ .ctx_parse = ixgbe_flow_fdir_tunnel_ctx_parse,
+ .ctx_validate = ixgbe_flow_fdir_ctx_validate,
+ .ctx_to_flow = ixgbe_flow_fdir_ctx_to_flow,
+ .flow_install = ixgbe_flow_fdir_flow_install,
+ .flow_uninstall = ixgbe_flow_fdir_flow_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_fdir_flow_engine = {
+ .name = "ixgbe_fdir",
+ .ctx_size = sizeof(struct ixgbe_fdir_ctx),
+ .flow_size = sizeof(struct ixgbe_fdir_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_FDIR,
+ .ops = &ixgbe_fdir_ops,
+ .graph = &ixgbe_fdir_normal_graph,
+};
+
+const struct ci_flow_engine ixgbe_fdir_tunnel_flow_engine = {
+ .name = "ixgbe_fdir_tunnel",
+ .ctx_size = sizeof(struct ixgbe_fdir_ctx),
+ .flow_size = sizeof(struct ixgbe_fdir_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_FDIR_TUNNEL,
+ .ops = &ixgbe_fdir_tunnel_ops,
+ .graph = &ixgbe_fdir_tunnel_graph,
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index 65ffe19939..770125350e 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -16,6 +16,7 @@ sources += files(
'ixgbe_flow_l2tun.c',
'ixgbe_flow_ntuple.c',
'ixgbe_flow_security.c',
+ 'ixgbe_flow_fdir.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
* [RFC PATCH v1 11/21] net/ixgbe: reimplement hash parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (9 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 10/21] net/ixgbe: reimplement FDIR parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 12/21] net/i40e: add support for common flow parsing Anatoly Burakov
` (10 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Vladimir Medvedkin
Use the new flow graph API and the common parsing framework to implement
a flow parser for RSS configuration.
The RSS flow parser does not really parse any "flows": the previous
implementation ignored flow patterns entirely and only looked at actions.
The new parser therefore does not specify a pattern graph, and will thus
match a NULL pattern, an empty pattern (START -> END), and an ANY pattern
(START -> ANY -> END).
RSS was the last engine to use the "filter list", so that list is now
removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/ixgbe/ixgbe_ethdev.c | 6 -
drivers/net/intel/ixgbe/ixgbe_ethdev.h | 5 +-
drivers/net/intel/ixgbe/ixgbe_flow.c | 277 +---------------------
drivers/net/intel/ixgbe/ixgbe_flow.h | 2 +
drivers/net/intel/ixgbe/ixgbe_flow_hash.c | 182 ++++++++++++++
drivers/net/intel/ixgbe/meson.build | 1 +
6 files changed, 193 insertions(+), 280 deletions(-)
create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_hash.c
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 442e4d96c6..7ecfebdb25 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1334,9 +1334,6 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
if (ret)
goto err_l2_tn_filter_init;
- /* initialize flow filter lists */
- ixgbe_filterlist_init();
-
/* initialize bandwidth configuration info */
memset(bw_conf, 0, sizeof(struct ixgbe_bw_conf));
@@ -3141,9 +3138,6 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
/* Remove all ntuple filters of the device */
ixgbe_ntuple_filter_uninit(dev);
- /* clear all the filters list */
- ixgbe_filterlist_flush();
-
/* Remove all Traffic Manager configuration */
ixgbe_tm_conf_uninit(dev);
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 17b9fa918f..182b013dc4 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -346,10 +346,9 @@ struct ixgbe_l2_tn_info {
uint16_t e_tag_ether_type; /* ether type for e-tag */
};
+/* no driver-specific data needed */
struct rte_flow {
struct ci_flow flow;
- enum rte_filter_type filter_type;
- void *rule;
};
struct ixgbe_macsec_setting {
@@ -691,8 +690,6 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
int
ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
struct ixgbe_l2_tunnel_conf *l2_tunnel);
-void ixgbe_filterlist_init(void);
-void ixgbe_filterlist_flush(void);
/*
* Flow director function prototypes
*/
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index ea32025079..9efad51177 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -48,23 +48,6 @@
#include "../common/flow_engine.h"
#include "ixgbe_flow.h"
-/* rss filter list structure */
-struct ixgbe_rss_conf_ele {
- TAILQ_ENTRY(ixgbe_rss_conf_ele) entries;
- struct ixgbe_rte_flow_rss_conf filter_info;
-};
-/* ixgbe_flow memory list structure */
-struct ixgbe_flow_mem {
- TAILQ_ENTRY(ixgbe_flow_mem) entries;
- struct rte_flow *flow;
-};
-
-TAILQ_HEAD(ixgbe_rss_filter_list, ixgbe_rss_conf_ele);
-TAILQ_HEAD(ixgbe_flow_mem_list, ixgbe_flow_mem);
-
-static struct ixgbe_rss_filter_list filter_rss_list;
-static struct ixgbe_flow_mem_list ixgbe_flow_list;
-
const struct ci_flow_engine_list ixgbe_flow_engine_list = {
{
&ixgbe_ethertype_flow_engine,
@@ -74,6 +57,7 @@ const struct ci_flow_engine_list ixgbe_flow_engine_list = {
&ixgbe_security_flow_engine,
&ixgbe_fdir_flow_engine,
&ixgbe_fdir_tunnel_flow_engine,
+ &ixgbe_hash_flow_engine,
},
};
/*
@@ -131,102 +115,6 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions,
* normally the packets should use network order.
*/
-/* Flow actions check specific to RSS filter */
-static int
-ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions,
- const struct ci_flow_actions_check_param *param,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action *action = parsed_actions->actions[0];
- const struct rte_flow_action_rss *rss_act = action->conf;
- struct rte_eth_dev *dev = param->driver_ctx;
- const size_t rss_key_len = sizeof(((struct ixgbe_rte_flow_rss_conf *)0)->key);
- size_t q_idx, q;
-
- /* check if queue list is not empty */
- if (rss_act->queue_num == 0) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS queue list is empty");
- }
-
- /* check if each RSS queue is valid */
- for (q_idx = 0; q_idx < rss_act->queue_num; q_idx++) {
- q = rss_act->queue[q_idx];
- if (q >= dev->data->nb_rx_queues) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Invalid RSS queue specified");
- }
- }
-
- /* only support default hash function */
- if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Non-default RSS hash functions are not supported");
- }
- /* levels aren't supported */
- if (rss_act->level) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "A nonzero RSS encapsulation level is not supported");
- }
- /* check key length */
- if (rss_act->key_len != 0 && rss_act->key_len != rss_key_len) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS key must be exactly 40 bytes long");
- }
- /* filter out unsupported RSS types */
- if ((rss_act->types & ~IXGBE_RSS_OFFLOAD_ALL) != 0) {
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Invalid RSS type specified");
- }
- return 0;
-}
-
-static int
-ixgbe_parse_rss_filter(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct ixgbe_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ap_param = {
- .allowed_types = (const enum rte_flow_action_type[]){
- /* only rss allowed here */
- RTE_FLOW_ACTION_TYPE_RSS,
- RTE_FLOW_ACTION_TYPE_END
- },
- .driver_ctx = dev,
- .check = ixgbe_flow_actions_check_rss,
- .max_actions = 1,
- };
- int ret;
- const struct rte_flow_action *action;
-
- /* validate attributes */
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret)
- return ret;
-
- /* parse requested actions */
- ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
- if (ret)
- return ret;
- action = parsed_actions.actions[0];
-
- if (ixgbe_rss_conf_init(rss_conf, action->conf))
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "RSS context initialization failure");
-
- return 0;
-}
-
/* remove the rss filter */
static void
ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
@@ -238,35 +126,6 @@ ixgbe_clear_rss_filter(struct rte_eth_dev *dev)
ixgbe_config_rss_filter(dev, &filter_info->rss_info, FALSE);
}
-void
-ixgbe_filterlist_init(void)
-{
- TAILQ_INIT(&filter_rss_list);
- TAILQ_INIT(&ixgbe_flow_list);
-}
-
-void
-ixgbe_filterlist_flush(void)
-{
- struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
- struct ixgbe_rss_conf_ele *rss_filter_ptr;
-
- while ((rss_filter_ptr = TAILQ_FIRST(&filter_rss_list))) {
- TAILQ_REMOVE(&filter_rss_list,
- rss_filter_ptr,
- entries);
- rte_free(rss_filter_ptr);
- }
-
- while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
- TAILQ_REMOVE(&ixgbe_flow_list,
- ixgbe_flow_mem_ptr,
- entries);
- rte_free(ixgbe_flow_mem_ptr->flow);
- rte_free(ixgbe_flow_mem_ptr);
- }
-}
-
/**
 * Create or destroy a flow rule.
 * Theoretically one rule can match more than one filter.
@@ -281,68 +140,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ixgbe_adapter *ad = dev->data->dev_private;
- int ret;
- struct ixgbe_rte_flow_rss_conf rss_conf;
- struct rte_flow *flow = NULL;
- struct ixgbe_rss_conf_ele *rss_filter_ptr;
- struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
- /* try the new flow engine first */
- flow = ci_flow_create(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
+ return ci_flow_create(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
attr, pattern, actions, error);
- if (flow != NULL) {
- return flow;
- }
-
- /* fall back to legacy flow engines */
-
- flow = rte_zmalloc("ixgbe_rte_flow", sizeof(struct rte_flow), 0);
- if (!flow) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- return (struct rte_flow *)flow;
- }
- ixgbe_flow_mem_ptr = rte_zmalloc("ixgbe_flow_mem",
- sizeof(struct ixgbe_flow_mem), 0);
- if (!ixgbe_flow_mem_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- rte_free(flow);
- return NULL;
- }
- ixgbe_flow_mem_ptr->flow = flow;
- TAILQ_INSERT_TAIL(&ixgbe_flow_list,
- ixgbe_flow_mem_ptr, entries);
-
- memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
- ret = ixgbe_parse_rss_filter(dev, attr,
- actions, &rss_conf, error);
- if (!ret) {
- ret = ixgbe_config_rss_filter(dev, &rss_conf, TRUE);
- if (!ret) {
- rss_filter_ptr = rte_zmalloc("ixgbe_rss_filter",
- sizeof(struct ixgbe_rss_conf_ele), 0);
- if (!rss_filter_ptr) {
- PMD_DRV_LOG(ERR, "failed to allocate memory");
- goto out;
- }
- ixgbe_rss_conf_init(&rss_filter_ptr->filter_info,
- &rss_conf.conf);
- TAILQ_INSERT_TAIL(&filter_rss_list,
- rss_filter_ptr, entries);
- flow->rule = rss_filter_ptr;
- flow->filter_type = RTE_ETH_FILTER_HASH;
- return flow;
- }
- }
-
-out:
- TAILQ_REMOVE(&ixgbe_flow_list,
- ixgbe_flow_mem_ptr, entries);
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to create flow.");
- rte_free(ixgbe_flow_mem_ptr);
- rte_free(flow);
- return NULL;
}
/**
@@ -358,22 +158,9 @@ ixgbe_flow_validate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ixgbe_adapter *ad = dev->data->dev_private;
- struct ixgbe_rte_flow_rss_conf rss_conf;
- int ret;
- /* try the new flow engine first */
- ret = ci_flow_validate(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
+ return ci_flow_validate(&ad->flow_engine_conf, &ixgbe_flow_engine_list,
attr, pattern, actions, error);
- if (ret == 0)
- return ret;
-
- /* fall back to legacy engines */
-
- memset(&rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf));
- ret = ixgbe_parse_rss_filter(dev, attr,
- actions, &rss_conf, error);
-
- return ret;
}
/* Destroy a flow rule on ixgbe. */
@@ -383,58 +170,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct ixgbe_adapter *ad = dev->data->dev_private;
- int ret;
- struct rte_flow *pmd_flow = flow;
- enum rte_filter_type filter_type = pmd_flow->filter_type;
- struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
- struct ixgbe_rss_conf_ele *rss_filter_ptr;
- /* try the new flow engine first */
- ret = ci_flow_destroy(&ad->flow_engine_conf,
- &ixgbe_flow_engine_list, flow, error);
- if (ret == 0) {
- return 0;
- }
-
- /* fall back to legacy engines */
-
- switch (filter_type) {
- case RTE_ETH_FILTER_HASH:
- rss_filter_ptr = (struct ixgbe_rss_conf_ele *)
- pmd_flow->rule;
- ret = ixgbe_config_rss_filter(dev,
- &rss_filter_ptr->filter_info, FALSE);
- if (!ret) {
- TAILQ_REMOVE(&filter_rss_list,
- rss_filter_ptr, entries);
- rte_free(rss_filter_ptr);
- }
- break;
- default:
- PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
- filter_type);
- ret = -EINVAL;
- break;
- }
-
- if (ret) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL, "Failed to destroy flow");
- return ret;
- }
-
- TAILQ_FOREACH(ixgbe_flow_mem_ptr, &ixgbe_flow_list, entries) {
- if (ixgbe_flow_mem_ptr->flow == pmd_flow) {
- TAILQ_REMOVE(&ixgbe_flow_list,
- ixgbe_flow_mem_ptr, entries);
- rte_free(ixgbe_flow_mem_ptr);
- break;
- }
- }
- rte_free(flow);
-
- return ret;
+ return ci_flow_destroy(&ad->flow_engine_conf, &ixgbe_flow_engine_list, flow, error);
}
/* Destroy all flow rules associated with a port on ixgbe. */
@@ -445,13 +182,15 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
struct ixgbe_adapter *ad = dev->data->dev_private;
int ret = 0;
- /* flush all flows from the new flow engine */
+ /* flush the flow engine */
ret = ci_flow_flush(&ad->flow_engine_conf, &ixgbe_flow_engine_list, error);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to flush flow");
return ret;
}
+ /* legacy filter state should already be gone, but clear it defensively */
+
ixgbe_clear_all_ntuple_filter(dev);
ixgbe_clear_all_ethertype_filter(dev);
ixgbe_clear_syn_filter(dev);
@@ -472,8 +211,6 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
ixgbe_clear_rss_filter(dev);
- ixgbe_filterlist_flush();
-
return 0;
}
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.h b/drivers/net/intel/ixgbe/ixgbe_flow.h
index 91ee5106e3..7a5dd09551 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.h
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.h
@@ -16,6 +16,7 @@ enum ixgbe_flow_engine_type {
IXGBE_FLOW_ENGINE_TYPE_SECURITY,
IXGBE_FLOW_ENGINE_TYPE_FDIR,
IXGBE_FLOW_ENGINE_TYPE_FDIR_TUNNEL,
+ IXGBE_FLOW_ENGINE_TYPE_HASH,
};
int
@@ -32,5 +33,6 @@ extern const struct ci_flow_engine ixgbe_ntuple_flow_engine;
extern const struct ci_flow_engine ixgbe_security_flow_engine;
extern const struct ci_flow_engine ixgbe_fdir_flow_engine;
extern const struct ci_flow_engine ixgbe_fdir_tunnel_flow_engine;
+extern const struct ci_flow_engine ixgbe_hash_flow_engine;
#endif /* _IXGBE_FLOW_H_ */
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow_hash.c b/drivers/net/intel/ixgbe/ixgbe_flow_hash.c
new file mode 100644
index 0000000000..db9678b914
--- /dev/null
+++ b/drivers/net/intel/ixgbe/ixgbe_flow_hash.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <rte_common.h>
+#include <rte_flow.h>
+#include <rte_flow_graph.h>
+#include <rte_ether.h>
+
+#include "ixgbe_ethdev.h"
+#include "ixgbe_flow.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+#include "../common/flow_engine.h"
+
+struct ixgbe_hash_flow {
+ struct rte_flow flow;
+ struct ixgbe_rte_flow_rss_conf rss_conf;
+};
+
+struct ixgbe_hash_ctx {
+ struct ci_flow_engine_ctx base;
+ struct ixgbe_rte_flow_rss_conf rss_conf;
+};
+
+/* Flow actions check specific to RSS filter */
+static int
+ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action *action = parsed_actions->actions[0];
+ const struct rte_flow_action_rss *rss_act = action->conf;
+ const struct rte_eth_dev *dev = param->driver_ctx;
+ const size_t rss_key_len = sizeof(((struct ixgbe_rte_flow_rss_conf *)0)->key);
+
+ /* check if queue list is not empty */
+ if (rss_act->queue_num == 0) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS queue list is empty");
+ }
+
+ if (rss_act->queue[rss_act->queue_num - 1] >= dev->data->nb_rx_queues) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Invalid RSS queue specified");
+ }
+
+ /* only support default hash function */
+ if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Non-default RSS hash functions are not supported");
+ }
+ /* levels aren't supported */
+ if (rss_act->level) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "A nonzero RSS encapsulation level is not supported");
+ }
+ /* check key length */
+ if (rss_act->key_len != 0 && rss_act->key_len != rss_key_len) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS key must be exactly 40 bytes long");
+ }
+ /* filter out unsupported RSS types */
+ if ((rss_act->types & ~IXGBE_RSS_OFFLOAD_ALL) != 0) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Invalid RSS type specified");
+ }
+ return 0;
+}
+
+static int
+ixgbe_flow_hash_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct ci_flow_actions parsed_actions;
+ struct ci_flow_actions_check_param ap_param = {
+ .allowed_types = (const enum rte_flow_action_type[]){
+ /* only rss allowed here */
+ RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .driver_ctx = ctx->dev,
+ .check = ixgbe_flow_actions_check_rss,
+ .max_actions = 1,
+ };
+ struct ixgbe_hash_ctx *hash_ctx = (struct ixgbe_hash_ctx *)ctx;
+ const struct rte_flow_action_rss *rss_conf;
+ int ret;
+
+ /* validate attributes */
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ /* parse requested actions */
+ ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ rss_conf = parsed_actions.actions[0]->conf;
+
+ ret = ixgbe_rss_conf_init(&hash_ctx->rss_conf, rss_conf);
+ if (ret) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, rss_conf,
+ "RSS context initialization failure");
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_flow_hash_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct ixgbe_hash_ctx *hash_ctx = (const struct ixgbe_hash_ctx *)ctx;
+ struct ixgbe_hash_flow *hash_flow = (struct ixgbe_hash_flow *)flow;
+
+ hash_flow->rss_conf = hash_ctx->rss_conf;
+
+ return 0;
+}
+
+static int
+ixgbe_flow_hash_flow_install(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct ixgbe_hash_flow *hash_flow = (struct ixgbe_hash_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ int ret;
+
+ ret = ixgbe_config_rss_filter(dev, &hash_flow->rss_conf, TRUE);
+ if (ret != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, flow,
+ "Failed to install RSS filter");
+ }
+ return 0;
+}
+
+static int
+ixgbe_flow_hash_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct rte_eth_dev *dev = flow->dev;
+ struct ixgbe_adapter *adapter = dev->data->dev_private;
+ struct ixgbe_filter_info *filter_info = IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
+ int ret;
+
+ ret = ixgbe_config_rss_filter(dev, &filter_info->rss_info, FALSE);
+ if (ret != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, flow,
+ "Failed to uninstall RSS filter");
+ }
+ return 0;
+}
+
+const struct ci_flow_engine_ops ixgbe_hash_ops = {
+ /* RSS engine always available */
+ .ctx_parse = ixgbe_flow_hash_ctx_parse,
+ .ctx_to_flow = ixgbe_flow_hash_ctx_to_flow,
+ .flow_install = ixgbe_flow_hash_flow_install,
+ .flow_uninstall = ixgbe_flow_hash_flow_uninstall,
+};
+
+const struct ci_flow_engine ixgbe_hash_flow_engine = {
+ .name = "ixgbe_hash",
+ .ctx_size = sizeof(struct ixgbe_hash_ctx),
+ .flow_size = sizeof(struct ixgbe_hash_flow),
+ .type = IXGBE_FLOW_ENGINE_TYPE_HASH,
+ .ops = &ixgbe_hash_ops,
+ /* RSS does not accept patterns */
+};
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index 770125350e..de35833e48 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -17,6 +17,7 @@ sources += files(
'ixgbe_flow_ntuple.c',
'ixgbe_flow_security.c',
'ixgbe_flow_fdir.c',
+ 'ixgbe_flow_hash.c',
'ixgbe_ipsec.c',
'ixgbe_pf.c',
'ixgbe_rxtx.c',
--
2.47.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [RFC PATCH v1 12/21] net/i40e: add support for common flow parsing
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (10 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 11/21] net/ixgbe: reimplement hash parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 13/21] net/i40e: reimplement ethertype parser Anatoly Burakov
` (9 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Implement support for the common flow parsing infrastructure in preparation for
migrating the flow engines.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 8 ++++++
drivers/net/intel/i40e/i40e_ethdev.h | 5 ++++
drivers/net/intel/i40e/i40e_flow.c | 38 +++++++++++++++++++++++++++-
drivers/net/intel/i40e/i40e_flow.h | 12 +++++++++
4 files changed, 62 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/intel/i40e/i40e_flow.h
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index af736f59be..b71a4fb0d1 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -43,6 +43,9 @@
#include "i40e_regs.h"
#include "rte_pmd_i40e.h"
#include "i40e_hash.h"
+#include "i40e_flow.h"
+
+#include "../common/flow_engine.h"
#define ETH_I40E_FLOATING_VEB_ARG "enable_floating_veb"
#define ETH_I40E_FLOATING_VEB_LIST_ARG "floating_veb_list"
@@ -1845,6 +1848,9 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
/* reset all stats of the device, including pf and main vsi */
i40e_dev_stats_reset(dev);
+ /* initialize flow engine configuration */
+ ci_flow_engine_conf_init(&pf->flow_engine_conf, &i40e_flow_engine_list, dev);
+
return 0;
err_init_fdir_filter_list:
@@ -2773,6 +2779,8 @@ i40e_dev_close(struct rte_eth_dev *dev)
rte_free(p_flow);
}
+ ci_flow_engine_conf_reset(&pf->flow_engine_conf, &i40e_flow_engine_list);
+
/* release the fdir static allocated memory */
i40e_fdir_memory_cleanup(pf);
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 91ad0f8d0e..109ee7f278 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -21,6 +21,8 @@
#include "base/i40e_type.h"
#include "base/virtchnl.h"
+#include "../common/flow_engine.h"
+
#define I40E_AQ_LEN 32
#define I40E_AQ_BUF_SZ 4096
/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */
@@ -278,6 +280,7 @@ enum i40e_flxpld_layer_idx {
* Struct to store flow created.
*/
struct rte_flow {
+ struct ci_flow base;
TAILQ_ENTRY(rte_flow) node;
enum rte_filter_type filter_type;
void *rule;
@@ -1172,6 +1175,8 @@ struct i40e_pf {
/* The floating enable flag for the specific VF */
bool floating_veb_list[I40E_MAX_VF];
struct i40e_flow_list flow_list;
+ /* flow engine configuration */
+ struct ci_flow_engine_conf flow_engine_conf;
bool mpls_replace_flag; /* 1 - MPLS filter replace is done */
bool gtp_replace_flag; /* 1 - GTP-C/U filter replace is done */
bool qinq_replace_flag; /* QINQ filter replace is done */
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index ee48ebf4c3..2f9094bcc7 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -25,9 +25,12 @@
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
#include "i40e_hash.h"
+#include "i40e_flow.h"
#include "../common/flow_check.h"
+const struct ci_flow_engine_list i40e_flow_engine_list = {0};
+
#define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET)
#define I40E_IPV6_FRAG_HEADER 44
#define I40E_TENANT_ARRAY_NUM 3
@@ -3795,8 +3798,16 @@ i40e_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
+ struct i40e_pf *pf = dev->data->dev_private;
/* creates dummy context */
struct i40e_filter_ctx filter_ctx = {0};
+ int ret;
+
+ /* try the new engine first */
+ ret = ci_flow_validate(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ attr, pattern, actions, error);
+ if (ret == 0)
+ return 0;
return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
}
@@ -3814,6 +3825,12 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
+ /* try the new engine first */
+ flow = ci_flow_create(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ attr, pattern, actions, error);
+ if (flow != NULL)
+ return flow;
+
ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
if (ret < 0)
return NULL;
@@ -3920,6 +3937,12 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret = 0;
+ /* try the new engine first */
+ ret = ci_flow_destroy(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ flow, error);
+ if (ret == 0)
+ return 0;
+
switch (filter_type) {
case RTE_ETH_FILTER_ETHERTYPE:
ret = i40e_flow_destroy_ethertype_filter(pf,
@@ -4064,6 +4087,11 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
int ret;
+ /* flush the new engine first */
+ ret = ci_flow_flush(&pf->flow_engine_conf, &i40e_flow_engine_list, error);
+ if (ret != 0)
+ return ret;
+
ret = i40e_flow_flush_fdir_filter(pf);
if (ret) {
rte_flow_error_set(error, -ret,
@@ -4213,14 +4241,22 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
}
static int
-i40e_flow_query(struct rte_eth_dev *dev __rte_unused,
+i40e_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
const struct rte_flow_action *actions,
void *data, struct rte_flow_error *error)
{
+ struct i40e_pf *pf = dev->data->dev_private;
struct i40e_rss_filter *rss_rule = (struct i40e_rss_filter *)flow->rule;
enum rte_filter_type filter_type = flow->filter_type;
struct rte_flow_action_rss *rss_conf = data;
+ int ret;
+
+ /* try the new engine first */
+ ret = ci_flow_query(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ flow, actions, data, error);
+ if (ret == 0)
+ return 0;
if (!rss_rule) {
rte_flow_error_set(error, EINVAL,
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
new file mode 100644
index 0000000000..c958868661
--- /dev/null
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#ifndef _I40E_FLOW_H_
+#define _I40E_FLOW_H_
+
+#include "../common/flow_engine.h"
+
+extern const struct ci_flow_engine_list i40e_flow_engine_list;
+
+#endif /* _I40E_FLOW_H_ */
--
2.47.3
* [RFC PATCH v1 13/21] net/i40e: reimplement ethertype parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (11 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 12/21] net/i40e: add support for common flow parsing Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 14/21] net/i40e: reimplement FDIR parser Anatoly Burakov
` (8 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement the
Ethertype flow parser.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 1 -
drivers/net/intel/i40e/i40e_flow.c | 284 +------------------
drivers/net/intel/i40e/i40e_flow.h | 8 +
drivers/net/intel/i40e/i40e_flow_ethertype.c | 258 +++++++++++++++++
drivers/net/intel/i40e/meson.build | 1 +
5 files changed, 273 insertions(+), 279 deletions(-)
create mode 100644 drivers/net/intel/i40e/i40e_flow_ethertype.c
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 109ee7f278..118ba8a6c7 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1311,7 +1311,6 @@ extern const struct rte_flow_ops i40e_flow_ops;
struct i40e_filter_ctx {
union {
- struct rte_eth_ethertype_filter ethertype_filter;
struct i40e_fdir_filter_conf fdir_filter;
struct i40e_tunnel_filter_conf consistent_tunnel_filter;
struct i40e_rte_flow_rss_conf rss_conf;
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2f9094bcc7..68155a58b4 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -29,7 +29,11 @@
#include "../common/flow_check.h"
-const struct ci_flow_engine_list i40e_flow_engine_list = {0};
+const struct ci_flow_engine_list i40e_flow_engine_list = {
+ {
+ &i40e_flow_engine_ethertype,
+ }
+};
#define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET)
#define I40E_IPV6_FRAG_HEADER 44
@@ -58,15 +62,6 @@ static int i40e_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
const struct rte_flow_action *actions,
void *data, struct rte_flow_error *error);
-static int
-i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct rte_eth_ethertype_filter *filter);
-static int i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct rte_eth_ethertype_filter *filter);
static int i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item *pattern,
struct rte_flow_error *error,
@@ -79,11 +74,6 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
@@ -109,12 +99,9 @@ static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error,
struct i40e_filter_ctx *filter);
-static int i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
- struct i40e_ethertype_filter *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
struct i40e_tunnel_filter *filter);
static int i40e_flow_flush_fdir_filter(struct i40e_pf *pf);
-static int i40e_flow_flush_ethertype_filter(struct i40e_pf *pf);
static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf);
static int
i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
@@ -984,8 +971,6 @@ static enum rte_flow_item_type pattern_fdir_ipv6_udp_esp[] = {
};
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* Ethertype */
- { pattern_ethertype, i40e_flow_parse_ethertype_filter },
/* FDIR - support default flow type without flexible payload*/
{ pattern_ethertype, i40e_flow_parse_fdir_filter },
{ pattern_fdir_ipv4, i40e_flow_parse_fdir_filter },
@@ -1197,7 +1182,7 @@ i40e_find_parse_filter_func(struct rte_flow_item *pattern, uint32_t *idx)
return parse_filter;
}
-static int
+int
i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1224,181 +1209,6 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid)
return 0;
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: MAC_ETHTYPE and ETHTYPE.
- * 3. SRC mac_addr mask should be 00:00:00:00:00:00.
- * 4. DST mac_addr mask should be 00:00:00:00:00:00 or
- * FF:FF:FF:FF:FF:FF
- * 5. Ether_type mask should be 0xFFFF.
- */
-static int
-i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct rte_eth_ethertype_filter *filter)
-{
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- enum rte_flow_item_type item_type;
- int ret;
- uint16_t tpid;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- eth_spec = item->spec;
- eth_mask = item->mask;
- /* Get the MAC info. */
- if (!eth_spec || !eth_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "NULL ETH spec/mask");
- return -rte_errno;
- }
-
- /* Mask bits of source MAC address must be full of 0.
- * Mask bits of destination MAC address must be full
- * of 1 or full of 0.
- */
- if (!rte_is_zero_ether_addr(ð_mask->hdr.src_addr) ||
- (!rte_is_zero_ether_addr(ð_mask->hdr.dst_addr) &&
- !rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr))) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid MAC_addr mask");
- return -rte_errno;
- }
-
- if ((eth_mask->hdr.ether_type & UINT16_MAX) != UINT16_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ethertype mask");
- return -rte_errno;
- }
-
- /* If mask bits of destination MAC address
- * are full of 1, set RTE_ETHTYPE_FLAGS_MAC.
- */
- if (rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr)) {
- filter->mac_addr = eth_spec->hdr.dst_addr;
- filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
- } else {
- filter->flags &= ~RTE_ETHTYPE_FLAGS_MAC;
- }
- filter->ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
-
- if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
- filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
- filter->ether_type == RTE_ETHER_TYPE_LLDP) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ether_type in control packet filter.");
- return -rte_errno;
- }
-
- ret = i40e_get_outer_vlan(dev, &tpid);
- if (ret != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Can not get the Ethertype identifying the L2 tag");
- return -rte_errno;
- }
- if (filter->ether_type == tpid) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ether_type in"
- " control packet filter.");
- return -rte_errno;
- }
-
- break;
- default:
- break;
- }
- }
-
- return 0;
-}
-
-/* Ethertype action only supports QUEUE or DROP. */
-static int
-i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct rte_eth_ethertype_filter *filter)
-{
- struct ci_flow_actions parsed_actions = {0};
- struct ci_flow_actions_check_param ac_param = {
- .allowed_types = (enum rte_flow_action_type[]) {
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_DROP,
- RTE_FLOW_ACTION_TYPE_END,
- },
- .max_actions = 1,
- };
- const struct rte_flow_action *action;
- int ret;
-
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- action = parsed_actions.actions[0];
-
- if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
- const struct rte_flow_action_queue *act_q = action->conf;
- /* check queue index */
- if (act_q->index >= dev->data->nb_rx_queues) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "Invalid queue index");
- }
- filter->queue = act_q->index;
- } else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
- filter->flags |= RTE_ETHTYPE_FLAGS_DROP;
- }
- return 0;
-}
-
-static int
-i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct rte_eth_ethertype_filter *ethertype_filter = &filter->ethertype_filter;
- int ret;
-
- ret = i40e_flow_parse_ethertype_pattern(dev, pattern, error,
- ethertype_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_ethertype_action(dev, actions, error,
- ethertype_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_ETHERTYPE;
-
- return ret;
-}
-
static int
i40e_flow_check_raw_item(const struct rte_flow_item *item,
const struct rte_flow_item_raw *raw_spec,
@@ -3877,13 +3687,6 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
switch (filter_ctx.type) {
- case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_ethertype_filter_set(pf, &filter_ctx.ethertype_filter, 1);
- if (ret)
- goto free_flow;
- flow->rule = TAILQ_LAST(&pf->ethertype.ethertype_list,
- i40e_ethertype_filter_list);
- break;
case RTE_ETH_FILTER_FDIR:
ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
if (ret)
@@ -3944,10 +3747,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
return 0;
switch (filter_type) {
- case RTE_ETH_FILTER_ETHERTYPE:
- ret = i40e_flow_destroy_ethertype_filter(pf,
- (struct i40e_ethertype_filter *)flow->rule);
- break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_flow_destroy_tunnel_filter(pf,
(struct i40e_tunnel_filter *)flow->rule);
@@ -3987,41 +3786,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
return ret;
}
-static int
-i40e_flow_destroy_ethertype_filter(struct i40e_pf *pf,
- struct i40e_ethertype_filter *filter)
-{
- struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- struct i40e_ethertype_rule *ethertype_rule = &pf->ethertype;
- struct i40e_ethertype_filter *node;
- struct i40e_control_filter_stats stats;
- uint16_t flags = 0;
- int ret = 0;
-
- if (!(filter->flags & RTE_ETHTYPE_FLAGS_MAC))
- flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC;
- if (filter->flags & RTE_ETHTYPE_FLAGS_DROP)
- flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP;
- flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE;
-
- memset(&stats, 0, sizeof(stats));
- ret = i40e_aq_add_rem_control_packet_filter(hw,
- filter->input.mac_addr.addr_bytes,
- filter->input.ether_type,
- flags, pf->main_vsi->seid,
- filter->queue, 0, &stats, NULL);
- if (ret < 0)
- return ret;
-
- node = i40e_sw_ethertype_filter_lookup(ethertype_rule, &filter->input);
- if (!node)
- return -EINVAL;
-
- ret = i40e_sw_ethertype_filter_del(pf, &node->input);
-
- return ret;
-}
-
static int
i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
struct i40e_tunnel_filter *filter)
@@ -4100,14 +3864,6 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
return -rte_errno;
}
- ret = i40e_flow_flush_ethertype_filter(pf);
- if (ret) {
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to ethertype flush flows.");
- return -rte_errno;
- }
-
ret = i40e_flow_flush_tunnel_filter(pf);
if (ret) {
rte_flow_error_set(error, -ret,
@@ -4184,34 +3940,6 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
return ret;
}
-/* Flush all ethertype filters */
-static int
-i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
-{
- struct i40e_ethertype_filter_list
- *ethertype_list = &pf->ethertype.ethertype_list;
- struct i40e_ethertype_filter *filter;
- struct rte_flow *flow;
- void *temp;
- int ret = 0;
-
- while ((filter = TAILQ_FIRST(ethertype_list))) {
- ret = i40e_flow_destroy_ethertype_filter(pf, filter);
- if (ret)
- return ret;
- }
-
- /* Delete ethertype flows in flow list. */
- RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
- if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
- TAILQ_REMOVE(&pf->flow_list, flow, node);
- rte_free(flow);
- }
- }
-
- return ret;
-}
-
/* Flush all tunnel filters */
static int
i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index c958868661..d6efd95216 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -7,6 +7,14 @@
#include "../common/flow_engine.h"
+int i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid);
+
+enum i40e_flow_engine_type {
+ I40E_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
+};
+
extern const struct ci_flow_engine_list i40e_flow_engine_list;
+extern const struct ci_flow_engine i40e_flow_engine_ethertype;
+
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_ethertype.c b/drivers/net/intel/i40e/i40e_flow_ethertype.c
new file mode 100644
index 0000000000..2a2e03a764
--- /dev/null
+++ b/drivers/net/intel/i40e/i40e_flow_ethertype.c
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include "i40e_ethdev.h"
+#include "i40e_flow.h"
+
+#include "../common/flow_engine.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+
+struct i40e_ethertype_ctx {
+ struct ci_flow_engine_ctx base;
+ struct rte_eth_ethertype_filter ethertype;
+};
+
+struct i40e_ethertype_flow {
+ struct rte_flow base;
+ struct rte_eth_ethertype_filter ethertype;
+};
+
+#define I40E_IPV6_FRAG_HEADER 44
+#define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET)
+#define I40E_VLAN_TCI_MASK 0xFFFF
+#define I40E_VLAN_PRI_MASK 0xE000
+#define I40E_VLAN_CFI_MASK 0x1000
+#define I40E_VLAN_VID_MASK 0x0FFF
+
+/**
+ * Ethertype filter graph implementation
+ * Pattern: START -> ETH -> END
+ */
+
+enum i40e_ethertype_node_id {
+ I40E_ETHERTYPE_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_ETHERTYPE_NODE_ETH,
+ I40E_ETHERTYPE_NODE_END,
+ I40E_ETHERTYPE_NODE_MAX,
+};
+
+static int
+i40e_ethertype_node_eth_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item, struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ uint16_t ether_type;
+
+ /* Source MAC mask must be all zeros */
+ if (!CI_FIELD_IS_ZERO(&eth_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Source MAC filtering not supported");
+ }
+
+ /* Dest MAC mask must be all zeros or all ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&eth_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Dest MAC filtering not supported");
+ }
+
+ /* Ethertype mask must be exact match */
+ if (!CI_FIELD_IS_MASKED(&eth_mask->hdr.ether_type)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Ethertype must be exactly matched");
+ }
+
+ /* Check for valid ethertype (not IPv4/IPv6/LLDP) */
+ ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+ if (ether_type == RTE_ETHER_TYPE_IPV4 ||
+ ether_type == RTE_ETHER_TYPE_IPV6 ||
+ ether_type == RTE_ETHER_TYPE_VLAN ||
+ ether_type == RTE_ETHER_TYPE_LLDP) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4/IPv6/LLDP not supported by ethertype filter");
+ }
+
+ return 0;
+}
+
+static int
+i40e_ethertype_node_eth_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ struct i40e_ethertype_ctx *ethertype_ctx = ctx;
+ struct rte_eth_ethertype_filter *filter = &ethertype_ctx->ethertype;
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ int ret;
+ uint16_t tpid, ether_type;
+
+ ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+
+ if (CI_FIELD_IS_MASKED(&eth_mask->hdr.dst_addr)) {
+ filter->mac_addr = eth_spec->hdr.dst_addr;
+ filter->flags |= RTE_ETHTYPE_FLAGS_MAC;
+ }
+
+ ret = i40e_get_outer_vlan(ethertype_ctx->base.dev, &tpid);
+ if (ret != 0) {
+ return rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Can not get the Ethertype identifying the L2 tag");
+ }
+ if (ether_type == tpid) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported ether_type in control packet filter.");
+ }
+ filter->ether_type = ether_type;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_ethertype_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_ETHERTYPE_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_ETHERTYPE_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_ethertype_node_eth_validate,
+ .process = i40e_ethertype_node_eth_process,
+ },
+ [I40E_ETHERTYPE_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_ETHERTYPE_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_ETHERTYPE_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_ETHERTYPE_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_ETHERTYPE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+i40e_flow_ethertype_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_ethertype_ctx *ethertype_ctx = (struct i40e_ethertype_ctx *)ctx;
+ struct rte_eth_dev *dev = ethertype_ctx->base.dev;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param ac_param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_DROP,
+ RTE_FLOW_ACTION_TYPE_END,
+ },
+ .max_actions = 1,
+ };
+ const struct rte_flow_action *action;
+ int ret;
+
+ ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ action = parsed_actions.actions[0];
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *act_q = action->conf;
+ /* check queue index */
+ if (act_q->index >= dev->data->nb_rx_queues) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "Invalid queue index");
+ }
+ ethertype_ctx->ethertype.queue = act_q->index;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+ ethertype_ctx->ethertype.flags |= RTE_ETHTYPE_FLAGS_DROP;
+ }
+ return 0;
+}
+
+static int
+i40e_flow_ethertype_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct i40e_ethertype_ctx *ethertype_ctx = (const struct i40e_ethertype_ctx *)ctx;
+ struct i40e_ethertype_flow *ethertype_flow = (struct i40e_ethertype_flow *)flow;
+
+ /* copy ethertype filter configuration to flow */
+ ethertype_flow->ethertype = ethertype_ctx->ethertype;
+
+ return 0;
+}
+
+static int
+i40e_flow_ethertype_install(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct i40e_ethertype_flow *ethertype_flow = (struct i40e_ethertype_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int ret;
+
+ ret = i40e_ethertype_filter_set(pf, &ethertype_flow->ethertype, true);
+ if (ret) {
+ return rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_HANDLE, flow,
+ "Failed to install ethertype filter");
+ }
+ return 0;
+}
+
+static int
+i40e_flow_ethertype_uninstall(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct i40e_ethertype_flow *ethertype_flow = (struct i40e_ethertype_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int ret;
+
+ ret = i40e_ethertype_filter_set(pf, &ethertype_flow->ethertype, false);
+ if (ret) {
+ return rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_HANDLE, flow,
+ "Failed to uninstall ethertype filter");
+ }
+ return 0;
+}
+
+const struct ci_flow_engine_ops i40e_flow_engine_ethertype_ops = {
+ .ctx_parse = i40e_flow_ethertype_ctx_parse,
+ .ctx_to_flow = i40e_flow_ethertype_ctx_to_flow,
+ .flow_install = i40e_flow_ethertype_install,
+ .flow_uninstall = i40e_flow_ethertype_uninstall,
+};
+
+const struct ci_flow_engine i40e_flow_engine_ethertype = {
+ .name = "i40e_ethertype",
+ .ctx_size = sizeof(struct i40e_ethertype_ctx),
+ .flow_size = sizeof(struct i40e_ethertype_flow),
+ .type = I40E_FLOW_ENGINE_TYPE_ETHERTYPE,
+ .ops = &i40e_flow_engine_ethertype_ops,
+ .graph = &i40e_ethertype_graph,
+};
diff --git a/drivers/net/intel/i40e/meson.build b/drivers/net/intel/i40e/meson.build
index bccae1ffc1..bff0518fc9 100644
--- a/drivers/net/intel/i40e/meson.build
+++ b/drivers/net/intel/i40e/meson.build
@@ -25,6 +25,7 @@ sources += files(
'i40e_pf.c',
'i40e_fdir.c',
'i40e_flow.c',
+ 'i40e_flow_ethertype.c',
'i40e_tm.c',
'i40e_hash.c',
'i40e_vf_representor.c',
--
2.47.3
* [RFC PATCH v1 14/21] net/i40e: reimplement FDIR parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (12 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 13/21] net/i40e: reimplement ethertype parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 15/21] net/i40e: reimplement tunnel QinQ parser Anatoly Burakov
` (7 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for the flow director.
As a result of transitioning to more formalized validation, some checks
have become more stringent. In particular, for some protocols (such as
SCTP) the parser previously accepted non-zero or invalid masks and either
ignored them or misinterpreted them to mean "fully masked". This has now
been corrected.
The FDIR engine in i40e also relied on a custom memory allocation scheme
for FDIR flows; this has been migrated to the new infrastructure as well.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.c | 48 -
drivers/net/intel/i40e/i40e_ethdev.h | 25 +-
drivers/net/intel/i40e/i40e_fdir.c | 47 -
drivers/net/intel/i40e/i40e_flow.c | 1965 +----------------------
drivers/net/intel/i40e/i40e_flow.h | 6 +
drivers/net/intel/i40e/i40e_flow_fdir.c | 1806 +++++++++++++++++++++
drivers/net/intel/i40e/meson.build | 1 +
7 files changed, 1824 insertions(+), 2074 deletions(-)
create mode 100644 drivers/net/intel/i40e/i40e_flow_fdir.c
diff --git a/drivers/net/intel/i40e/i40e_ethdev.c b/drivers/net/intel/i40e/i40e_ethdev.c
index b71a4fb0d1..d4b5d36465 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.c
+++ b/drivers/net/intel/i40e/i40e_ethdev.c
@@ -1104,10 +1104,6 @@ i40e_init_fdir_filter_list(struct rte_eth_dev *dev)
uint32_t alloc = hw->func_caps.fd_filters_guaranteed;
uint32_t best = hw->func_caps.fd_filters_best_effort;
enum i40e_filter_pctype pctype;
- struct rte_bitmap *bmp = NULL;
- uint32_t bmp_size;
- void *mem = NULL;
- uint32_t i = 0;
int ret;
struct rte_hash_parameters fdir_hash_params = {
@@ -1164,50 +1160,8 @@ i40e_init_fdir_filter_list(struct rte_eth_dev *dev)
PMD_DRV_LOG(INFO, "FDIR guarantee space: %u, best_effort space %u.", alloc, best);
- fdir_info->fdir_flow_pool.pool =
- rte_zmalloc("i40e_fdir_entry",
- sizeof(struct i40e_fdir_entry) *
- fdir_info->fdir_space_size,
- 0);
-
- if (!fdir_info->fdir_flow_pool.pool) {
- PMD_INIT_LOG(ERR,
- "Failed to allocate memory for bitmap flow!");
- ret = -ENOMEM;
- goto err_fdir_bitmap_flow_alloc;
- }
-
- for (i = 0; i < fdir_info->fdir_space_size; i++)
- fdir_info->fdir_flow_pool.pool[i].idx = i;
-
- bmp_size =
- rte_bitmap_get_memory_footprint(fdir_info->fdir_space_size);
- mem = rte_zmalloc("fdir_bmap", bmp_size, RTE_CACHE_LINE_SIZE);
- if (mem == NULL) {
- PMD_INIT_LOG(ERR,
- "Failed to allocate memory for fdir bitmap!");
- ret = -ENOMEM;
- goto err_fdir_mem_alloc;
- }
- bmp = rte_bitmap_init(fdir_info->fdir_space_size, mem, bmp_size);
- if (bmp == NULL) {
- PMD_INIT_LOG(ERR,
- "Failed to initialization fdir bitmap!");
- ret = -ENOMEM;
- goto err_fdir_bmp_alloc;
- }
- for (i = 0; i < fdir_info->fdir_space_size; i++)
- rte_bitmap_set(bmp, i);
-
- fdir_info->fdir_flow_pool.bitmap = bmp;
-
return 0;
-err_fdir_bmp_alloc:
- rte_free(mem);
-err_fdir_mem_alloc:
- rte_free(fdir_info->fdir_flow_pool.pool);
-err_fdir_bitmap_flow_alloc:
rte_free(fdir_info->fdir_filter_array);
err_fdir_filter_array_alloc:
rte_free(fdir_info->hash_map);
@@ -1940,8 +1894,6 @@ i40e_fdir_memory_cleanup(struct i40e_pf *pf)
/* flow director memory cleanup */
rte_free(fdir_info->hash_map);
rte_hash_free(fdir_info->hash_table);
- rte_free(fdir_info->fdir_flow_pool.bitmap);
- rte_free(fdir_info->fdir_flow_pool.pool);
rte_free(fdir_info->fdir_filter_array);
}
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 118ba8a6c7..7c4786bec0 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -718,28 +718,12 @@ struct i40e_fdir_filter {
struct i40e_fdir_filter_conf fdir;
};
-/* fdir memory pool entry */
-struct i40e_fdir_entry {
- struct rte_flow flow;
- uint32_t idx;
-};
-
-/* pre-allocated fdir memory pool */
-struct i40e_fdir_flow_pool {
- /* a bitmap to manage the fdir pool */
- struct rte_bitmap *bitmap;
- /* the size the pool is pf->fdir->fdir_space_size */
- struct i40e_fdir_entry *pool;
-};
-
-#define FLOW_TO_FLOW_BITMAP(f) \
- container_of((f), struct i40e_fdir_entry, flow)
-
TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter);
/*
* A structure used to define fields of a FDIR related info.
*/
struct i40e_fdir_info {
+ uint64_t num_fdir_flows;
struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */
uint16_t match_counter_index; /* Statistic counter index used for fdir*/
struct ci_tx_queue *txq;
@@ -790,8 +774,6 @@ struct i40e_fdir_info {
uint32_t fdir_guarantee_free_space;
/* the fdir total guaranteed space */
uint32_t fdir_guarantee_total_space;
- /* the pre-allocated pool of the rte_flow */
- struct i40e_fdir_flow_pool fdir_flow_pool;
/* Mark if flex pit and mask is set */
bool flex_pit_flag[I40E_MAX_FLXPLD_LAYER];
@@ -1311,7 +1293,6 @@ extern const struct rte_flow_ops i40e_flow_ops;
struct i40e_filter_ctx {
union {
- struct i40e_fdir_filter_conf fdir_filter;
struct i40e_tunnel_filter_conf consistent_tunnel_filter;
struct i40e_rte_flow_rss_conf rss_conf;
};
@@ -1408,10 +1389,6 @@ uint64_t i40e_get_default_input_set(uint16_t pctype);
int i40e_ethertype_filter_set(struct i40e_pf *pf,
struct rte_eth_ethertype_filter *filter,
bool add);
-struct rte_flow *
-i40e_fdir_entry_pool_get(struct i40e_fdir_info *fdir_info);
-void i40e_fdir_entry_pool_put(struct i40e_fdir_info *fdir_info,
- struct rte_flow *flow);
int i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
const struct i40e_fdir_filter_conf *filter,
bool add);
diff --git a/drivers/net/intel/i40e/i40e_fdir.c b/drivers/net/intel/i40e/i40e_fdir.c
index 3b099d5a9e..8f72801206 100644
--- a/drivers/net/intel/i40e/i40e_fdir.c
+++ b/drivers/net/intel/i40e/i40e_fdir.c
@@ -1093,53 +1093,6 @@ i40e_sw_fdir_filter_del(struct i40e_pf *pf, struct i40e_fdir_input *input)
return 0;
}
-struct rte_flow *
-i40e_fdir_entry_pool_get(struct i40e_fdir_info *fdir_info)
-{
- struct rte_flow *flow = NULL;
- uint64_t slab = 0;
- uint32_t pos = 0;
- uint32_t i = 0;
- int ret;
-
- if (fdir_info->fdir_actual_cnt >=
- fdir_info->fdir_space_size) {
- PMD_DRV_LOG(ERR, "Fdir space full");
- return NULL;
- }
-
- ret = rte_bitmap_scan(fdir_info->fdir_flow_pool.bitmap, &pos,
- &slab);
-
- /* normally this won't happen as the fdir_actual_cnt should be
- * same with the number of the set bits in fdir_flow_pool,
- * but anyway handle this error condition here for safe
- */
- if (ret == 0) {
- PMD_DRV_LOG(ERR, "fdir_actual_cnt out of sync");
- return NULL;
- }
-
- i = rte_bsf64(slab);
- pos += i;
- rte_bitmap_clear(fdir_info->fdir_flow_pool.bitmap, pos);
- flow = &fdir_info->fdir_flow_pool.pool[pos].flow;
-
- memset(flow, 0, sizeof(struct rte_flow));
-
- return flow;
-}
-
-void
-i40e_fdir_entry_pool_put(struct i40e_fdir_info *fdir_info,
- struct rte_flow *flow)
-{
- struct i40e_fdir_entry *f;
-
- f = FLOW_TO_FLOW_BITMAP(flow);
- rte_bitmap_set(fdir_info->fdir_flow_pool.bitmap, f->idx);
-}
-
static int
i40e_flow_store_flex_pit(struct i40e_pf *pf,
struct i40e_fdir_flex_pit *flex_pit,
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 68155a58b4..44dcb4f5b2 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -32,12 +32,10 @@
const struct ci_flow_engine_list i40e_flow_engine_list = {
{
&i40e_flow_engine_ethertype,
+ &i40e_flow_engine_fdir,
}
};
-#define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET)
-#define I40E_IPV6_FRAG_HEADER 44
-#define I40E_TENANT_ARRAY_NUM 3
#define I40E_VLAN_TCI_MASK 0xFFFF
#define I40E_VLAN_PRI_MASK 0xE000
#define I40E_VLAN_CFI_MASK 0x1000
@@ -62,23 +60,10 @@ static int i40e_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
const struct rte_flow_action *actions,
void *data, struct rte_flow_error *error);
-static int i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_fdir_filter_conf *filter);
-static int i40e_flow_parse_fdir_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct i40e_fdir_filter_conf *filter);
static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
@@ -101,7 +86,6 @@ static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
struct i40e_tunnel_filter *filter);
-static int i40e_flow_flush_fdir_filter(struct i40e_pf *pf);
static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf);
static int
i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
@@ -128,19 +112,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-/* Pattern matched ethertype filter */
-static enum rte_flow_item_type pattern_ethertype[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-/* Pattern matched flow director filter */
-static enum rte_flow_item_type pattern_fdir_ipv4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static enum rte_flow_item_type pattern_fdir_ipv4_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV4,
@@ -178,30 +149,6 @@ static enum rte_flow_item_type pattern_fdir_ipv4_gtpu[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu_ipv6[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static enum rte_flow_item_type pattern_fdir_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
@@ -239,581 +186,6 @@ static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu_ipv6[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ethertype_vlan_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_udp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_tcp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv4_sctp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_udp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_tcp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_vlan_ipv6_sctp_raw_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_RAW,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
/* Pattern matched tunnel filter */
static enum rte_flow_item_type pattern_vxlan_1[] = {
RTE_FLOW_ITEM_TYPE_ETH,
@@ -926,138 +298,7 @@ static enum rte_flow_item_type pattern_qinq_1[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_fdir_ipv4_l2tpv3oip[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_L2TPV3OIP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_l2tpv3oip[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_L2TPV3OIP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_esp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_ESP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_esp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_ESP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_udp_esp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_ESP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp_esp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_ESP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* FDIR - support default flow type without flexible payload*/
- { pattern_ethertype, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_udp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_tcp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_sctp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_gtpc, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_gtpu, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_gtpu_ipv4, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_gtpu_ipv6, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_esp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_udp_esp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_udp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_tcp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_sctp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_gtpc, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_gtpu, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_gtpu_ipv4, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_gtpu_ipv6, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_esp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_udp_esp, i40e_flow_parse_fdir_filter },
- /* FDIR - support default flow type with flexible payload */
- { pattern_fdir_ethertype_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ethertype_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ethertype_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_udp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_udp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_udp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_tcp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_tcp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_tcp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_sctp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_sctp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv4_sctp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_udp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_udp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_udp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_tcp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_tcp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_tcp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_sctp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_sctp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_sctp_raw_3, i40e_flow_parse_fdir_filter },
- /* FDIR - support single vlan input set */
- { pattern_fdir_ethertype_vlan, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_udp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_tcp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_sctp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_udp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_tcp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_sctp, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ethertype_vlan_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ethertype_vlan_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ethertype_vlan_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_udp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_udp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_udp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_tcp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_tcp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_tcp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_sctp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_sctp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv4_sctp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_udp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_udp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_udp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_tcp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_tcp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_tcp_raw_3, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_sctp_raw_1, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_sctp_raw_2, i40e_flow_parse_fdir_filter },
- { pattern_fdir_vlan_ipv6_sctp_raw_3, i40e_flow_parse_fdir_filter },
/* VXLAN */
{ pattern_vxlan_1, i40e_flow_parse_vxlan_filter },
{ pattern_vxlan_2, i40e_flow_parse_vxlan_filter },
@@ -1080,9 +321,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = {
{ pattern_fdir_ipv6_gtpu, i40e_flow_parse_gtp_filter },
/* QINQ */
{ pattern_qinq_1, i40e_flow_parse_qinq_filter },
- /* L2TPv3 over IP */
- { pattern_fdir_ipv4_l2tpv3oip, i40e_flow_parse_fdir_filter },
- { pattern_fdir_ipv6_l2tpv3oip, i40e_flow_parse_fdir_filter },
/* L4 over port */
{ pattern_fdir_ipv4_udp, i40e_flow_parse_l4_cloud_filter },
{ pattern_fdir_ipv4_tcp, i40e_flow_parse_l4_cloud_filter },
@@ -1209,47 +447,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid)
return 0;
}
-static int
-i40e_flow_check_raw_item(const struct rte_flow_item *item,
- const struct rte_flow_item_raw *raw_spec,
- struct rte_flow_error *error)
-{
- if (!raw_spec->relative) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Relative should be 1.");
- return -rte_errno;
- }
-
- if (raw_spec->offset % sizeof(uint16_t)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Offset should be even.");
- return -rte_errno;
- }
-
- if (raw_spec->search || raw_spec->limit) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "search or limit is not supported.");
- return -rte_errno;
- }
-
- if (raw_spec->offset < 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Offset should be non-negative.");
- return -rte_errno;
- }
- return 0;
-}
-
-
-static uint8_t
+uint8_t
i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
enum rte_flow_item_type item_type,
struct i40e_fdir_filter_conf *filter)
@@ -1316,1023 +514,6 @@ i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
return I40E_FILTER_PCTYPE_INVALID;
}
-static void
-i40e_flow_set_filter_spi(struct i40e_fdir_filter_conf *filter,
- const struct rte_flow_item_esp *esp_spec)
-{
- if (filter->input.flow_ext.oip_type ==
- I40E_FDIR_IPTYPE_IPV4) {
- if (filter->input.flow_ext.is_udp)
- filter->input.flow.esp_ipv4_udp_flow.spi =
- esp_spec->hdr.spi;
- else
- filter->input.flow.esp_ipv4_flow.spi =
- esp_spec->hdr.spi;
- }
- if (filter->input.flow_ext.oip_type ==
- I40E_FDIR_IPTYPE_IPV6) {
- if (filter->input.flow_ext.is_udp)
- filter->input.flow.esp_ipv6_udp_flow.spi =
- esp_spec->hdr.spi;
- else
- filter->input.flow.esp_ipv6_flow.spi =
- esp_spec->hdr.spi;
- }
-}
-
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported patterns: refer to array i40e_supported_patterns.
- * 3. Default supported flow type and input set: refer to array
- * valid_fdir_inset_table in i40e_ethdev.c.
- * 4. Mask of fields which need to be matched should be
- * filled with 1.
- * 5. Mask of fields which needn't to be matched should be
- * filled with 0.
- * 6. GTP profile supports GTPv1 only.
- * 7. GTP-C response message ('source_port' = 2123) is not supported.
- */
-static int
-i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_fdir_filter_conf *filter)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_eth *eth_spec, *eth_mask;
- const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
- const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_last, *ipv4_mask;
- const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
- const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
- const struct rte_flow_item_udp *udp_spec, *udp_mask;
- const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
- const struct rte_flow_item_gtp *gtp_spec, *gtp_mask;
- const struct rte_flow_item_esp *esp_spec, *esp_mask;
- const struct rte_flow_item_raw *raw_spec, *raw_mask;
- const struct rte_flow_item_l2tpv3oip *l2tpv3oip_spec, *l2tpv3oip_mask;
-
- uint8_t pctype = 0;
- uint64_t input_set = I40E_INSET_NONE;
- enum rte_flow_item_type item_type;
- enum rte_flow_item_type next_type;
- enum rte_flow_item_type l3 = RTE_FLOW_ITEM_TYPE_END;
- enum rte_flow_item_type cus_proto = RTE_FLOW_ITEM_TYPE_END;
- uint32_t i, j;
- uint8_t ipv6_addr_mask[16] = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
- enum i40e_flxpld_layer_idx layer_idx = I40E_FLXPLD_L2_IDX;
- uint8_t raw_id = 0;
- int32_t off_arr[I40E_MAX_FLXPLD_FIED];
- uint16_t len_arr[I40E_MAX_FLXPLD_FIED];
- struct i40e_fdir_flex_pit flex_pit;
- uint8_t next_dst_off = 0;
- uint16_t flex_size;
- uint16_t ether_type;
- uint32_t vtc_flow_cpu;
- bool outer_ip = true;
- uint8_t field_idx;
- int ret;
- uint16_t tpid;
-
- memset(off_arr, 0, sizeof(off_arr));
- memset(len_arr, 0, sizeof(len_arr));
- filter->input.flow_ext.customized_pctype = false;
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last && item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- eth_spec = item->spec;
- eth_mask = item->mask;
- next_type = (item + 1)->type;
-
- if (next_type == RTE_FLOW_ITEM_TYPE_END &&
- (!eth_spec || !eth_mask)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "NULL eth spec/mask.");
- return -rte_errno;
- }
-
- if (eth_spec && eth_mask) {
- if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
- rte_is_zero_ether_addr(&eth_mask->hdr.src_addr)) {
- filter->input.flow.l2_flow.dst =
- eth_spec->hdr.dst_addr;
- input_set |= I40E_INSET_DMAC;
- } else if (rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr) &&
- rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
- filter->input.flow.l2_flow.src =
- eth_spec->hdr.src_addr;
- input_set |= I40E_INSET_SMAC;
- } else if (rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) &&
- rte_is_broadcast_ether_addr(&eth_mask->hdr.src_addr)) {
- filter->input.flow.l2_flow.dst =
- eth_spec->hdr.dst_addr;
- filter->input.flow.l2_flow.src =
- eth_spec->hdr.src_addr;
- input_set |= (I40E_INSET_DMAC | I40E_INSET_SMAC);
- } else if (!rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
- !rte_is_zero_ether_addr(&eth_mask->hdr.dst_addr)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid MAC_addr mask.");
- return -rte_errno;
- }
- }
- if (eth_spec && eth_mask &&
- next_type == RTE_FLOW_ITEM_TYPE_END) {
- if (eth_mask->hdr.ether_type != RTE_BE16(0xffff)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid type mask.");
- return -rte_errno;
- }
-
- ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
-
- if (ether_type == RTE_ETHER_TYPE_IPV4 ||
- ether_type == RTE_ETHER_TYPE_IPV6) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ether_type.");
- return -rte_errno;
- }
- ret = i40e_get_outer_vlan(dev, &tpid);
- if (ret != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Can not get the Ethertype identifying the L2 tag");
- return -rte_errno;
- }
- if (ether_type == tpid) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ether_type.");
- return -rte_errno;
- }
-
- input_set |= I40E_INSET_LAST_ETHER_TYPE;
- filter->input.flow.l2_flow.ether_type =
- eth_spec->hdr.ether_type;
- }
-
- pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
- layer_idx = I40E_FLXPLD_L2_IDX;
-
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- vlan_spec = item->spec;
- vlan_mask = item->mask;
-
- RTE_ASSERT(!(input_set & I40E_INSET_LAST_ETHER_TYPE));
- if (vlan_spec && vlan_mask) {
- if (vlan_mask->hdr.vlan_tci !=
- rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
- vlan_mask->hdr.vlan_tci !=
- rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
- vlan_mask->hdr.vlan_tci !=
- rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
- vlan_mask->hdr.vlan_tci !=
- rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported TCI mask.");
- }
- input_set |= I40E_INSET_VLAN_INNER;
- filter->input.flow_ext.vlan_tci =
- vlan_spec->hdr.vlan_tci;
- }
- if (vlan_spec && vlan_mask && vlan_mask->hdr.eth_proto) {
- if (vlan_mask->hdr.eth_proto != RTE_BE16(0xffff)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid inner_type"
- " mask.");
- return -rte_errno;
- }
-
- ether_type =
- rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
-
- if (ether_type == RTE_ETHER_TYPE_IPV4 ||
- ether_type == RTE_ETHER_TYPE_IPV6) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported inner_type.");
- return -rte_errno;
- }
- ret = i40e_get_outer_vlan(dev, &tpid);
- if (ret != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Can not get the Ethertype identifying the L2 tag");
- return -rte_errno;
- }
- if (ether_type == tpid) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ether_type.");
- return -rte_errno;
- }
-
- input_set |= I40E_INSET_LAST_ETHER_TYPE;
- filter->input.flow.l2_flow.ether_type =
- vlan_spec->hdr.eth_proto;
- }
-
- pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
- layer_idx = I40E_FLXPLD_L2_IDX;
-
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- l3 = RTE_FLOW_ITEM_TYPE_IPV4;
- ipv4_spec = item->spec;
- ipv4_mask = item->mask;
- ipv4_last = item->last;
- pctype = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER;
- layer_idx = I40E_FLXPLD_L3_IDX;
-
- if (ipv4_last) {
- if (!ipv4_spec || !ipv4_mask || !outer_ip) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- /* Only fragment_offset supports range */
- if (ipv4_last->hdr.version_ihl ||
- ipv4_last->hdr.type_of_service ||
- ipv4_last->hdr.total_length ||
- ipv4_last->hdr.packet_id ||
- ipv4_last->hdr.time_to_live ||
- ipv4_last->hdr.next_proto_id ||
- ipv4_last->hdr.hdr_checksum ||
- ipv4_last->hdr.src_addr ||
- ipv4_last->hdr.dst_addr) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- }
- if (ipv4_spec && ipv4_mask && outer_ip) {
- /* Check IPv4 mask and update input set */
- if (ipv4_mask->hdr.version_ihl ||
- ipv4_mask->hdr.total_length ||
- ipv4_mask->hdr.packet_id ||
- ipv4_mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 mask.");
- return -rte_errno;
- }
-
- if (ipv4_mask->hdr.src_addr == UINT32_MAX)
- input_set |= I40E_INSET_IPV4_SRC;
- if (ipv4_mask->hdr.dst_addr == UINT32_MAX)
- input_set |= I40E_INSET_IPV4_DST;
- if (ipv4_mask->hdr.type_of_service == UINT8_MAX)
- input_set |= I40E_INSET_IPV4_TOS;
- if (ipv4_mask->hdr.time_to_live == UINT8_MAX)
- input_set |= I40E_INSET_IPV4_TTL;
- if (ipv4_mask->hdr.next_proto_id == UINT8_MAX)
- input_set |= I40E_INSET_IPV4_PROTO;
-
- /* Check if it is fragment. */
- uint16_t frag_mask =
- ipv4_mask->hdr.fragment_offset;
- uint16_t frag_spec =
- ipv4_spec->hdr.fragment_offset;
- uint16_t frag_last = 0;
- if (ipv4_last)
- frag_last =
- ipv4_last->hdr.fragment_offset;
- if (frag_mask) {
- frag_mask = rte_be_to_cpu_16(frag_mask);
- frag_spec = rte_be_to_cpu_16(frag_spec);
- frag_last = rte_be_to_cpu_16(frag_last);
- /* frag_off mask has to be 0x3fff */
- if (frag_mask !=
- (RTE_IPV4_HDR_OFFSET_MASK |
- RTE_IPV4_HDR_MF_FLAG)) {
- rte_flow_error_set(error,
- EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 fragment_offset mask");
- return -rte_errno;
- }
- /*
- * non-frag rule:
- * mask=0x3fff,spec=0
- * frag rule:
- * mask=0x3fff,spec=0x8,last=0x2000
- */
- if (frag_spec ==
- (1 << RTE_IPV4_HDR_FO_SHIFT) &&
- frag_last == RTE_IPV4_HDR_MF_FLAG) {
- pctype =
- I40E_FILTER_PCTYPE_FRAG_IPV4;
- } else if (frag_spec || frag_last) {
- rte_flow_error_set(error,
- EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 fragment_offset rule");
- return -rte_errno;
- }
- } else if (frag_spec || frag_last) {
- rte_flow_error_set(error,
- EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid fragment_offset");
- return -rte_errno;
- }
-
- if (input_set & (I40E_INSET_DMAC | I40E_INSET_SMAC)) {
- if (input_set & (I40E_INSET_IPV4_SRC |
- I40E_INSET_IPV4_DST | I40E_INSET_IPV4_TOS |
- I40E_INSET_IPV4_TTL | I40E_INSET_IPV4_PROTO)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "L2 and L3 input set are exclusive.");
- return -rte_errno;
- }
- } else {
- /* Get the filter info */
- filter->input.flow.ip4_flow.proto =
- ipv4_spec->hdr.next_proto_id;
- filter->input.flow.ip4_flow.tos =
- ipv4_spec->hdr.type_of_service;
- filter->input.flow.ip4_flow.ttl =
- ipv4_spec->hdr.time_to_live;
- filter->input.flow.ip4_flow.src_ip =
- ipv4_spec->hdr.src_addr;
- filter->input.flow.ip4_flow.dst_ip =
- ipv4_spec->hdr.dst_addr;
-
- filter->input.flow_ext.inner_ip = false;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV4;
- }
- } else if (!ipv4_spec && !ipv4_mask && !outer_ip) {
- filter->input.flow_ext.inner_ip = true;
- filter->input.flow_ext.iip_type =
- I40E_FDIR_IPTYPE_IPV4;
- } else if (!ipv4_spec && !ipv4_mask && outer_ip) {
- filter->input.flow_ext.inner_ip = false;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV4;
- } else if ((ipv4_spec || ipv4_mask) && !outer_ip) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid inner IPv4 mask.");
- return -rte_errno;
- }
-
- if (outer_ip)
- outer_ip = false;
-
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- l3 = RTE_FLOW_ITEM_TYPE_IPV6;
- ipv6_spec = item->spec;
- ipv6_mask = item->mask;
- pctype = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER;
- layer_idx = I40E_FLXPLD_L3_IDX;
-
- if (ipv6_spec && ipv6_mask && outer_ip) {
- /* Check IPv6 mask and update input set */
- if (ipv6_mask->hdr.payload_len) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 mask");
- return -rte_errno;
- }
-
- if (!memcmp(&ipv6_mask->hdr.src_addr,
- ipv6_addr_mask,
- sizeof(ipv6_mask->hdr.src_addr)))
- input_set |= I40E_INSET_IPV6_SRC;
- if (!memcmp(&ipv6_mask->hdr.dst_addr,
- ipv6_addr_mask,
- sizeof(ipv6_mask->hdr.dst_addr)))
- input_set |= I40E_INSET_IPV6_DST;
-
- if ((ipv6_mask->hdr.vtc_flow &
- rte_cpu_to_be_32(I40E_IPV6_TC_MASK))
- == rte_cpu_to_be_32(I40E_IPV6_TC_MASK))
- input_set |= I40E_INSET_IPV6_TC;
- if (ipv6_mask->hdr.proto == UINT8_MAX)
- input_set |= I40E_INSET_IPV6_NEXT_HDR;
- if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
- input_set |= I40E_INSET_IPV6_HOP_LIMIT;
-
- /* Get filter info */
- vtc_flow_cpu =
- rte_be_to_cpu_32(ipv6_spec->hdr.vtc_flow);
- filter->input.flow.ipv6_flow.tc =
- (uint8_t)(vtc_flow_cpu >>
- I40E_FDIR_IPv6_TC_OFFSET);
- filter->input.flow.ipv6_flow.proto =
- ipv6_spec->hdr.proto;
- filter->input.flow.ipv6_flow.hop_limits =
- ipv6_spec->hdr.hop_limits;
-
- filter->input.flow_ext.inner_ip = false;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV6;
-
- rte_memcpy(filter->input.flow.ipv6_flow.src_ip,
- &ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(filter->input.flow.ipv6_flow.dst_ip,
- &ipv6_spec->hdr.dst_addr, 16);
-
- /* Check if it is fragment. */
- if (ipv6_spec->hdr.proto ==
- I40E_IPV6_FRAG_HEADER)
- pctype = I40E_FILTER_PCTYPE_FRAG_IPV6;
- } else if (!ipv6_spec && !ipv6_mask && !outer_ip) {
- filter->input.flow_ext.inner_ip = true;
- filter->input.flow_ext.iip_type =
- I40E_FDIR_IPTYPE_IPV6;
- } else if (!ipv6_spec && !ipv6_mask && outer_ip) {
- filter->input.flow_ext.inner_ip = false;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV6;
- } else if ((ipv6_spec || ipv6_mask) && !outer_ip) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid inner IPv6 mask");
- return -rte_errno;
- }
-
- if (outer_ip)
- outer_ip = false;
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- tcp_spec = item->spec;
- tcp_mask = item->mask;
-
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
- else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
- if (tcp_spec && tcp_mask) {
- /* Check TCP mask and update input set */
- if (tcp_mask->hdr.sent_seq ||
- tcp_mask->hdr.recv_ack ||
- tcp_mask->hdr.data_off ||
- tcp_mask->hdr.tcp_flags ||
- tcp_mask->hdr.rx_win ||
- tcp_mask->hdr.cksum ||
- tcp_mask->hdr.tcp_urp) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid TCP mask");
- return -rte_errno;
- }
-
- if (tcp_mask->hdr.src_port == UINT16_MAX)
- input_set |= I40E_INSET_SRC_PORT;
- if (tcp_mask->hdr.dst_port == UINT16_MAX)
- input_set |= I40E_INSET_DST_PORT;
-
- if (input_set & (I40E_INSET_DMAC | I40E_INSET_SMAC)) {
- if (input_set &
- (I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "L2 and L4 input set are exclusive.");
- return -rte_errno;
- }
- } else {
- /* Get filter info */
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4) {
- filter->input.flow.tcp4_flow.src_port =
- tcp_spec->hdr.src_port;
- filter->input.flow.tcp4_flow.dst_port =
- tcp_spec->hdr.dst_port;
- } else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6) {
- filter->input.flow.tcp6_flow.src_port =
- tcp_spec->hdr.src_port;
- filter->input.flow.tcp6_flow.dst_port =
- tcp_spec->hdr.dst_port;
- }
- }
- }
-
- layer_idx = I40E_FLXPLD_L4_IDX;
-
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- udp_spec = item->spec;
- udp_mask = item->mask;
-
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
- else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
-
- if (udp_spec && udp_mask) {
- /* Check UDP mask and update input set*/
- if (udp_mask->hdr.dgram_len ||
- udp_mask->hdr.dgram_cksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid UDP mask");
- return -rte_errno;
- }
-
- if (udp_mask->hdr.src_port == UINT16_MAX)
- input_set |= I40E_INSET_SRC_PORT;
- if (udp_mask->hdr.dst_port == UINT16_MAX)
- input_set |= I40E_INSET_DST_PORT;
-
- if (input_set & (I40E_INSET_DMAC | I40E_INSET_SMAC)) {
- if (input_set &
- (I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "L2 and L4 input set are exclusive.");
- return -rte_errno;
- }
- } else {
- /* Get filter info */
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4) {
- filter->input.flow.udp4_flow.src_port =
- udp_spec->hdr.src_port;
- filter->input.flow.udp4_flow.dst_port =
- udp_spec->hdr.dst_port;
- } else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6) {
- filter->input.flow.udp6_flow.src_port =
- udp_spec->hdr.src_port;
- filter->input.flow.udp6_flow.dst_port =
- udp_spec->hdr.dst_port;
- }
- }
- }
- filter->input.flow_ext.is_udp = true;
- layer_idx = I40E_FLXPLD_L4_IDX;
-
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- case RTE_FLOW_ITEM_TYPE_GTPU:
- if (!pf->gtp_support) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported protocol");
- return -rte_errno;
- }
-
- gtp_spec = item->spec;
- gtp_mask = item->mask;
-
- if (gtp_spec && gtp_mask) {
- if (gtp_mask->hdr.gtp_hdr_info ||
- gtp_mask->hdr.msg_type ||
- gtp_mask->hdr.plen ||
- gtp_mask->hdr.teid != UINT32_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid GTP mask");
- return -rte_errno;
- }
-
- filter->input.flow.gtp_flow.teid =
- gtp_spec->hdr.teid;
- filter->input.flow_ext.customized_pctype = true;
- cus_proto = item_type;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- if (!pf->esp_support) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported ESP protocol");
- return -rte_errno;
- }
-
- esp_spec = item->spec;
- esp_mask = item->mask;
-
- if (!esp_spec || !esp_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ESP item");
- return -rte_errno;
- }
-
- if (esp_spec && esp_mask) {
- if (esp_mask->hdr.spi != UINT32_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ESP mask");
- return -rte_errno;
- }
- i40e_flow_set_filter_spi(filter, esp_spec);
- filter->input.flow_ext.customized_pctype = true;
- cus_proto = item_type;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- sctp_spec = item->spec;
- sctp_mask = item->mask;
-
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV4_SCTP;
- else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- pctype =
- I40E_FILTER_PCTYPE_NONF_IPV6_SCTP;
-
- if (sctp_spec && sctp_mask) {
- /* Check SCTP mask and update input set */
- if (sctp_mask->hdr.cksum) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid UDP mask");
- return -rte_errno;
- }
-
- if (sctp_mask->hdr.src_port == UINT16_MAX)
- input_set |= I40E_INSET_SRC_PORT;
- if (sctp_mask->hdr.dst_port == UINT16_MAX)
- input_set |= I40E_INSET_DST_PORT;
- if (sctp_mask->hdr.tag == UINT32_MAX)
- input_set |= I40E_INSET_SCTP_VT;
-
- /* Get filter info */
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4) {
- filter->input.flow.sctp4_flow.src_port =
- sctp_spec->hdr.src_port;
- filter->input.flow.sctp4_flow.dst_port =
- sctp_spec->hdr.dst_port;
- filter->input.flow.sctp4_flow.verify_tag
- = sctp_spec->hdr.tag;
- } else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6) {
- filter->input.flow.sctp6_flow.src_port =
- sctp_spec->hdr.src_port;
- filter->input.flow.sctp6_flow.dst_port =
- sctp_spec->hdr.dst_port;
- filter->input.flow.sctp6_flow.verify_tag
- = sctp_spec->hdr.tag;
- }
- }
-
- layer_idx = I40E_FLXPLD_L4_IDX;
-
- break;
- case RTE_FLOW_ITEM_TYPE_RAW:
- raw_spec = item->spec;
- raw_mask = item->mask;
-
- if (!raw_spec || !raw_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "NULL RAW spec/mask");
- return -rte_errno;
- }
-
- if (pf->support_multi_driver) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported flexible payload.");
- return -rte_errno;
- }
-
- ret = i40e_flow_check_raw_item(item, raw_spec, error);
- if (ret < 0)
- return ret;
-
- off_arr[raw_id] = raw_spec->offset;
- len_arr[raw_id] = raw_spec->length;
-
- flex_size = 0;
- memset(&flex_pit, 0, sizeof(struct i40e_fdir_flex_pit));
- field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + raw_id;
- flex_pit.size =
- raw_spec->length / sizeof(uint16_t);
- flex_pit.dst_offset =
- next_dst_off / sizeof(uint16_t);
-
- for (i = 0; i <= raw_id; i++) {
- if (i == raw_id)
- flex_pit.src_offset +=
- raw_spec->offset /
- sizeof(uint16_t);
- else
- flex_pit.src_offset +=
- (off_arr[i] + len_arr[i]) /
- sizeof(uint16_t);
- flex_size += len_arr[i];
- }
- if (((flex_pit.src_offset + flex_pit.size) >=
- I40E_MAX_FLX_SOURCE_OFF / sizeof(uint16_t)) ||
- flex_size > I40E_FDIR_MAX_FLEXLEN) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Exceeds maximal payload limit.");
- return -rte_errno;
- }
-
- if (raw_spec->length != 0) {
- if (raw_spec->pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "NULL RAW spec pattern");
- return -rte_errno;
- }
- if (raw_mask->pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "NULL RAW mask pattern");
- return -rte_errno;
- }
- if (raw_spec->length != raw_mask->length) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "RAW spec and mask length mismatch");
- return -rte_errno;
- }
- }
-
- for (i = 0; i < raw_spec->length; i++) {
- j = i + next_dst_off;
- if (j >= RTE_ETH_FDIR_MAX_FLEXLEN ||
- j >= I40E_FDIR_MAX_FLEX_LEN)
- break;
- filter->input.flow_ext.flexbytes[j] =
- raw_spec->pattern[i];
- filter->input.flow_ext.flex_mask[j] =
- raw_mask->pattern[i];
- }
-
- next_dst_off += raw_spec->length;
- raw_id++;
-
- filter->input.flow_ext.flex_pit[field_idx] = flex_pit;
- filter->input.flow_ext.layer_idx = layer_idx;
- filter->input.flow_ext.raw_id = raw_id;
- filter->input.flow_ext.is_flex_flow = true;
- break;
- case RTE_FLOW_ITEM_TYPE_L2TPV3OIP:
- l2tpv3oip_spec = item->spec;
- l2tpv3oip_mask = item->mask;
-
- if (!l2tpv3oip_spec || !l2tpv3oip_mask)
- break;
-
- if (l2tpv3oip_mask->session_id != UINT32_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid L2TPv3 mask");
- return -rte_errno;
- }
-
- if (l3 == RTE_FLOW_ITEM_TYPE_IPV4) {
- filter->input.flow.ip4_l2tpv3oip_flow.session_id =
- l2tpv3oip_spec->session_id;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV4;
- } else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6) {
- filter->input.flow.ip6_l2tpv3oip_flow.session_id =
- l2tpv3oip_spec->session_id;
- filter->input.flow_ext.oip_type =
- I40E_FDIR_IPTYPE_IPV6;
- }
-
- filter->input.flow_ext.customized_pctype = true;
- cus_proto = item_type;
- break;
- default:
- break;
- }
- }
-
- /* Get customized pctype value */
- if (filter->input.flow_ext.customized_pctype) {
- pctype = i40e_flow_fdir_get_pctype_value(pf, cus_proto, filter);
- if (pctype == I40E_FILTER_PCTYPE_INVALID) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Unsupported pctype");
- return -rte_errno;
- }
- }
-
- /* If customized pctype is not used, set fdir configuration.*/
- if (!filter->input.flow_ext.customized_pctype) {
- /* Check if the input set is valid */
- if (i40e_validate_input_set(pctype, RTE_ETH_FILTER_FDIR,
- input_set) != 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid input set");
- return -rte_errno;
- }
-
- filter->input.flow_ext.input_set = input_set;
- }
-
- filter->input.pctype = pctype;
-
- return 0;
-}
-
-/* Parse to get the action info of a FDIR filter.
- * FDIR action supports QUEUE or (QUEUE + MARK).
- */
-static int
-i40e_flow_parse_fdir_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct i40e_fdir_filter_conf *filter)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ci_flow_actions parsed_actions = {0};
- struct ci_flow_actions_check_param ac_param = {
- .allowed_types = (enum rte_flow_action_type[]) {
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_DROP,
- RTE_FLOW_ACTION_TYPE_PASSTHRU,
- RTE_FLOW_ACTION_TYPE_MARK,
- RTE_FLOW_ACTION_TYPE_FLAG,
- RTE_FLOW_ACTION_TYPE_RSS,
- RTE_FLOW_ACTION_TYPE_END
- },
- .max_actions = 2,
- };
- const struct rte_flow_action *first, *second;
- int ret;
-
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- first = parsed_actions.actions[0];
- /* can be NULL */
- second = parsed_actions.actions[1];
-
- switch (first->type) {
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- {
- const struct rte_flow_action_queue *act_q = first->conf;
- /* check against PF constraints */
- if (!filter->input.flow_ext.is_vf && act_q->index >= pf->dev_data->nb_rx_queues) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, first,
- "Invalid queue ID for FDIR");
- }
- /* check against VF constraints */
- if (filter->input.flow_ext.is_vf && act_q->index >= pf->vf_nb_qps) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, first,
- "Invalid queue ID for FDIR");
- }
- filter->action.rx_queue = act_q->index;
- filter->action.behavior = I40E_FDIR_ACCEPT;
- break;
- }
- case RTE_FLOW_ACTION_TYPE_DROP:
- filter->action.behavior = I40E_FDIR_REJECT;
- break;
- case RTE_FLOW_ACTION_TYPE_PASSTHRU:
- filter->action.behavior = I40E_FDIR_PASSTHRU;
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- {
- const struct rte_flow_action_mark *act_m = first->conf;
- filter->action.behavior = I40E_FDIR_PASSTHRU;
- filter->action.report_status = I40E_FDIR_REPORT_ID;
- filter->soft_id = act_m->id;
- break;
- }
- default:
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, first,
- "Invalid first action for FDIR");
- }
-
- /* do we have another? */
- if (second == NULL)
- return 0;
-
- switch (second->type) {
- case RTE_FLOW_ACTION_TYPE_MARK:
- {
- const struct rte_flow_action_mark *act_m = second->conf;
- /* only one mark action can be specified */
- if (first->type == RTE_FLOW_ACTION_TYPE_MARK) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, second,
- "Invalid second action for FDIR");
- }
- filter->action.report_status = I40E_FDIR_REPORT_ID;
- filter->soft_id = act_m->id;
- break;
- }
- case RTE_FLOW_ACTION_TYPE_FLAG:
- {
- /* mark + flag is unsupported */
- if (first->type == RTE_FLOW_ACTION_TYPE_MARK) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, second,
- "Invalid second action for FDIR");
- }
- filter->action.report_status = I40E_FDIR_NO_REPORT_STATUS;
- break;
- }
- case RTE_FLOW_ACTION_TYPE_RSS:
- /* RSS filter only can be after passthru or mark */
- if (first->type != RTE_FLOW_ACTION_TYPE_PASSTHRU &&
- first->type != RTE_FLOW_ACTION_TYPE_MARK) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, second,
- "Invalid second action for FDIR");
- }
- break;
- default:
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, second,
- "Invalid second action for FDIR");
- }
-
- return 0;
-}
-
-static int
-i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_fdir_filter_conf *fdir_filter = &filter->fdir_filter;
- int ret;
-
- ret = i40e_flow_parse_fdir_pattern(dev, pattern, error, fdir_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_fdir_action(dev, actions, error, fdir_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_FDIR;
-
- return 0;
-}
-
/* Parse to get the action info of a tunnel filter
* Tunnel action only supports PF, VF and QUEUE.
*/
@@ -3632,7 +1813,6 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_filter_ctx filter_ctx = {0};
struct rte_flow *flow = NULL;
- struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret;
/* try the new engine first */
@@ -3645,55 +1825,15 @@ i40e_flow_create(struct rte_eth_dev *dev,
if (ret < 0)
return NULL;
- if (filter_ctx.type == RTE_ETH_FILTER_FDIR) {
- /* if this is the first time we're creating an fdir flow */
- if (pf->fdir.fdir_vsi == NULL) {
- ret = i40e_fdir_setup(pf);
- if (ret != I40E_SUCCESS) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL, "Failed to setup fdir.");
- return NULL;
- }
- ret = i40e_fdir_configure(dev);
- if (ret < 0) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL, "Failed to configure fdir.");
- i40e_fdir_teardown(pf);
- return NULL;
- }
- }
- /* If create the first fdir rule, enable fdir check for rx queues */
- if (TAILQ_EMPTY(&pf->fdir.fdir_list))
- i40e_fdir_rx_proc_enable(dev, 1);
-
- flow = i40e_fdir_entry_pool_get(fdir_info);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOBUFS,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Fdir space full");
-
- return flow;
- }
- } else {
- flow = rte_zmalloc("i40e_flow", sizeof(struct rte_flow), 0);
- if (!flow) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to allocate memory");
- return flow;
- }
+ flow = rte_zmalloc("i40e_flow", sizeof(struct rte_flow), 0);
+ if (!flow) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Failed to allocate memory");
+ return flow;
}
switch (filter_ctx.type) {
- case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev, &filter_ctx.fdir_filter, 1);
- if (ret)
- goto free_flow;
- flow->rule = TAILQ_LAST(&pf->fdir.fdir_list,
- i40e_fdir_filter_list);
- break;
case RTE_ETH_FILTER_TUNNEL:
ret = i40e_dev_consistent_tunnel_filter_set(pf,
&filter_ctx.consistent_tunnel_filter, 1);
@@ -3722,10 +1862,7 @@ i40e_flow_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
"Failed to create flow.");
- if (filter_ctx.type != RTE_ETH_FILTER_FDIR)
- rte_free(flow);
- else
- i40e_fdir_entry_pool_put(fdir_info, flow);
+ rte_free(flow);
return NULL;
}
@@ -3737,7 +1874,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
enum rte_filter_type filter_type = flow->filter_type;
- struct i40e_fdir_info *fdir_info = &pf->fdir;
int ret = 0;
/* try the new engine first */
@@ -3751,16 +1887,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
ret = i40e_flow_destroy_tunnel_filter(pf,
(struct i40e_tunnel_filter *)flow->rule);
break;
- case RTE_ETH_FILTER_FDIR:
- ret = i40e_flow_add_del_fdir_filter(dev,
- &((struct i40e_fdir_filter *)flow->rule)->fdir,
- 0);
-
- /* If the last flow is destroyed, disable fdir. */
- if (!ret && TAILQ_EMPTY(&pf->fdir.fdir_list)) {
- i40e_fdir_rx_proc_enable(dev, 0);
- }
- break;
case RTE_ETH_FILTER_HASH:
ret = i40e_hash_filter_destroy(pf, flow->rule);
break;
@@ -3773,10 +1899,7 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
if (!ret) {
TAILQ_REMOVE(&pf->flow_list, flow, node);
- if (filter_type == RTE_ETH_FILTER_FDIR)
- i40e_fdir_entry_pool_put(fdir_info, flow);
- else
- rte_free(flow);
+ rte_free(flow);
} else
rte_flow_error_set(error, -ret,
@@ -3856,14 +1979,6 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
if (ret != 0)
return ret;
- ret = i40e_flow_flush_fdir_filter(pf);
- if (ret) {
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to flush FDIR flows.");
- return -rte_errno;
- }
-
ret = i40e_flow_flush_tunnel_filter(pf);
if (ret) {
rte_flow_error_set(error, -ret,
@@ -3880,66 +1995,6 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
return ret;
}
-static int
-i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
-{
- struct rte_eth_dev *dev = &rte_eth_devices[pf->dev_data->port_id];
- struct i40e_fdir_info *fdir_info = &pf->fdir;
- struct i40e_fdir_filter *fdir_filter;
- enum i40e_filter_pctype pctype;
- struct rte_flow *flow;
- void *temp;
- int ret;
- uint32_t i = 0;
-
- ret = i40e_fdir_flush(dev);
- if (!ret) {
- /* Delete FDIR filters in FDIR list. */
- while ((fdir_filter = TAILQ_FIRST(&fdir_info->fdir_list))) {
- ret = i40e_sw_fdir_filter_del(pf,
- &fdir_filter->fdir.input);
- if (ret < 0)
- return ret;
- }
-
- /* Delete FDIR flows in flow list. */
- RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
- if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
- TAILQ_REMOVE(&pf->flow_list, flow, node);
- }
- }
-
- /* reset bitmap */
- rte_bitmap_reset(fdir_info->fdir_flow_pool.bitmap);
- for (i = 0; i < fdir_info->fdir_space_size; i++) {
- fdir_info->fdir_flow_pool.pool[i].idx = i;
- rte_bitmap_set(fdir_info->fdir_flow_pool.bitmap, i);
- }
-
- fdir_info->fdir_actual_cnt = 0;
- fdir_info->fdir_guarantee_free_space =
- fdir_info->fdir_guarantee_total_space;
- memset(fdir_info->fdir_filter_array,
- 0,
- sizeof(struct i40e_fdir_filter) *
- I40E_MAX_FDIR_FILTER_NUM);
-
- for (pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
- pctype <= I40E_FILTER_PCTYPE_L2_PAYLOAD; pctype++) {
- pf->fdir.flow_count[pctype] = 0;
- pf->fdir.flex_mask_flag[pctype] = 0;
- }
-
- for (i = 0; i < I40E_MAX_FLXPLD_LAYER; i++)
- pf->fdir.flex_pit_flag[i] = 0;
-
- /* Disable FDIR processing as all FDIR rules are now flushed */
- i40e_fdir_rx_proc_enable(dev, 0);
- }
-
- return ret;
-}
-
/* Flush all tunnel filters */
static int
i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index d6efd95216..e6ad1afdba 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -8,13 +8,19 @@
#include "../common/flow_engine.h"
int i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid);
+uint8_t
+i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
+ enum rte_flow_item_type item_type,
+ struct i40e_fdir_filter_conf *filter);
enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
+ I40E_FLOW_ENGINE_TYPE_FDIR,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
extern const struct ci_flow_engine i40e_flow_engine_ethertype;
+extern const struct ci_flow_engine i40e_flow_engine_fdir;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_fdir.c b/drivers/net/intel/i40e/i40e_flow_fdir.c
new file mode 100644
index 0000000000..87435e9d2b
--- /dev/null
+++ b/drivers/net/intel/i40e/i40e_flow_fdir.c
@@ -0,0 +1,1806 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include "i40e_ethdev.h"
+#include "i40e_flow.h"
+
+#include <rte_bitmap.h>
+#include <rte_malloc.h>
+
+#include "../common/flow_engine.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+
+struct i40e_fdir_ctx {
+ struct ci_flow_engine_ctx base;
+ struct i40e_fdir_filter_conf fdir_filter;
+ enum rte_flow_item_type custom_pctype;
+ struct flex_item {
+ size_t size;
+ size_t offset;
+ } flex_data[I40E_MAX_FLXPLD_FIED];
+};
+
+struct i40e_flow_engine_fdir_flow {
+ struct rte_flow base;
+ struct i40e_fdir_filter_conf fdir_filter;
+};
+
+struct i40e_fdir_flow_pool_entry {
+ struct i40e_flow_engine_fdir_flow flow;
+ uint32_t idx;
+};
+
+struct i40e_fdir_engine_priv {
+ struct rte_bitmap *bmp;
+ struct i40e_fdir_flow_pool_entry *pool;
+};
+
+#define I40E_FDIR_FLOW_ENTRY(flow_ptr) \
+ container_of((flow_ptr), struct i40e_fdir_flow_pool_entry, flow)
+
+#define I40E_IPV6_FRAG_HEADER 44
+#define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET)
+#define I40E_VLAN_TCI_MASK 0xFFFF
+#define I40E_VLAN_PRI_MASK 0xE000
+#define I40E_VLAN_CFI_MASK 0x1000
+#define I40E_VLAN_VID_MASK 0x0FFF
+
+/**
+ * FDIR graph implementation (non-tunnel)
+ * Pattern: START -> ETH -> [VLAN] -> (IPv4 | IPv6) -> [TCP | UDP | SCTP | ESP | L2TPv3 | GTP] -> END
+ * With RAW flexible payload support:
+ * - L2: ETH/VLAN -> RAW -> RAW -> RAW -> END
+ * - L3: IPv4/IPv6 -> RAW -> RAW -> RAW -> END
+ * - L4: TCP/UDP/SCTP -> RAW -> RAW -> RAW -> END
+ * GTP tunnel support:
+ * - IPv4/IPv6 -> UDP -> GTP -> END (GTP-C, GTP-U outer)
+ * - IPv4/IPv6 -> UDP -> GTP -> IPv4/IPv6 -> END (GTP-U with inner IP)
+ */
+
+enum i40e_fdir_node_id {
+ I40E_FDIR_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_FDIR_NODE_ETH,
+ I40E_FDIR_NODE_VLAN,
+ I40E_FDIR_NODE_IPV4,
+ I40E_FDIR_NODE_IPV6,
+ I40E_FDIR_NODE_TCP,
+ I40E_FDIR_NODE_UDP,
+ I40E_FDIR_NODE_SCTP,
+ I40E_FDIR_NODE_ESP,
+ I40E_FDIR_NODE_L2TPV3OIP,
+ I40E_FDIR_NODE_GTPC,
+ I40E_FDIR_NODE_GTPU,
+ I40E_FDIR_NODE_INNER_IPV4,
+ I40E_FDIR_NODE_INNER_IPV6,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ I40E_FDIR_NODE_MAX,
+};
+
+static int
+i40e_fdir_node_eth_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ bool no_src_mac, no_dst_mac, src_mac, dst_mac;
+
+ /* may be empty */
+ if (eth_spec == NULL && eth_mask == NULL)
+ return 0;
+
+ /* source and destination masks may be all zero or all one */
+ no_src_mac = CI_FIELD_IS_ZERO(&eth_mask->hdr.src_addr);
+ no_dst_mac = CI_FIELD_IS_ZERO(&eth_mask->hdr.dst_addr);
+ src_mac = CI_FIELD_IS_MASKED(&eth_mask->hdr.src_addr);
+ dst_mac = CI_FIELD_IS_MASKED(&eth_mask->hdr.dst_addr);
+
+ /* can't be all zero */
+ if (no_src_mac && no_dst_mac) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid eth mask");
+ }
+ /* can't be neither zero nor ones */
+ if ((!no_src_mac && !src_mac) ||
+ (!no_dst_mac && !dst_mac)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid eth mask");
+ }
+
+ /* ethertype can either be unmasked or fully masked */
+ if (CI_FIELD_IS_ZERO(&eth_mask->hdr.ether_type))
+ return 0;
+
+ if (!CI_FIELD_IS_MASKED(&eth_mask->hdr.ether_type)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item, "Invalid ethertype mask");
+ }
+
+ /* Check for valid ethertype (not IPv4/IPv6) */
+ uint16_t ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+ if (ether_type == RTE_ETHER_TYPE_IPV4 ||
+ ether_type == RTE_ETHER_TYPE_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4/IPv6 not supported by ethertype filter");
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_eth_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *fdir_conf = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ uint16_t tpid, ether_type;
+ uint64_t input_set = 0;
+ int ret;
+
+ /* Set layer index for L2 flexible payload (after ETH/VLAN) */
+ fdir_conf->input.flow_ext.layer_idx = I40E_FLXPLD_L2_IDX;
+
+ /* set packet type */
+ fdir_conf->input.pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
+
+ /* do we need to set up MAC addresses? */
+ if (eth_spec == NULL && eth_mask == NULL)
+ return 0;
+
+ /* do we care for source address? */
+ if (CI_FIELD_IS_MASKED(ð_mask->hdr.src_addr)) {
+ fdir_conf->input.flow.l2_flow.src = eth_spec->hdr.src_addr;
+ input_set |= I40E_INSET_SMAC;
+ }
+ /* do we care for destination address? */
+ if (CI_FIELD_IS_MASKED(ð_mask->hdr.dst_addr)) {
+ fdir_conf->input.flow.l2_flow.dst = eth_spec->hdr.dst_addr;
+ input_set |= I40E_INSET_DMAC;
+ }
+
+ /* do we care for ethertype? */
+ if (eth_mask->hdr.ether_type) {
+ ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
+ ret = i40e_get_outer_vlan(fdir_ctx->base.dev, &tpid);
+ if (ret != 0) {
+ return rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Can not get the Ethertype identifying the L2 tag");
+ }
+ if (ether_type == tpid) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported ether_type in control packet filter.");
+ }
+ fdir_conf->input.flow.l2_flow.ether_type = eth_spec->hdr.ether_type;
+ input_set |= I40E_INSET_LAST_ETHER_TYPE;
+ }
+
+ fdir_conf->input.flow_ext.input_set = input_set;
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_vlan_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ uint16_t ether_type;
+
+ if (vlan_spec == NULL && vlan_mask == NULL)
+ return 0;
+
+ /* TCI mask is required but must be one of the supported masks */
+ if (vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_TCI_MASK) &&
+ vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_PRI_MASK) &&
+ vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_CFI_MASK) &&
+ vlan_mask->hdr.vlan_tci != rte_cpu_to_be_16(I40E_VLAN_VID_MASK)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported TCI mask");
+ }
+ if (CI_FIELD_IS_ZERO(&vlan_mask->hdr.eth_proto))
+ return 0;
+
+ if (!CI_FIELD_IS_MASKED(&vlan_mask->hdr.eth_proto)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid VLAN header mask");
+ }
+
+ /* can't match on eth_proto as we're already matching on ethertype */
+ if (filter->input.flow_ext.input_set & I40E_INSET_LAST_ETHER_TYPE) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Cannot set two ethertype filters");
+ }
+
+ ether_type = rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
+ if (ether_type == RTE_ETHER_TYPE_IPV4 ||
+ ether_type == RTE_ETHER_TYPE_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4/IPv6 not supported by VLAN protocol filter");
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_vlan_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+
+ /* Set layer index for L2 flexible payload (after ETH/VLAN) */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L2_IDX;
+
+ /* set packet type */
+ filter->input.pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
+
+ if (vlan_spec == NULL && vlan_mask == NULL)
+ return 0;
+
+ /* Store TCI value if requested */
+ if (vlan_mask->hdr.vlan_tci) {
+ filter->input.flow_ext.vlan_tci = vlan_spec->hdr.vlan_tci;
+ filter->input.flow_ext.input_set |= I40E_INSET_VLAN_INNER;
+ }
+
+ /* if ethertype specified, store it */
+ if (vlan_mask->hdr.eth_proto) {
+ uint16_t tpid, ether_type;
+ int ret;
+
+ ether_type = rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
+
+ ret = i40e_get_outer_vlan(fdir_ctx->base.dev, &tpid);
+ if (ret != 0) {
+ return rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Can not get the Ethertype identifying the L2 tag");
+ }
+ if (ether_type == tpid) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported ether_type in control packet filter.");
+ }
+ filter->input.flow.l2_flow.ether_type = vlan_spec->hdr.eth_proto;
+ filter->input.flow_ext.input_set |= I40E_INSET_LAST_ETHER_TYPE;
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_ipv4_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *ipv4_spec = item->spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+ const struct rte_flow_item_ipv4 *ipv4_last = item->last;
+
+ if (ipv4_mask == NULL && ipv4_spec == NULL)
+ return 0;
+
+ /* Validate mask fields */
+ if (ipv4_mask->hdr.version_ihl ||
+ ipv4_mask->hdr.total_length ||
+ ipv4_mask->hdr.packet_id ||
+ ipv4_mask->hdr.hdr_checksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 header mask");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.src_addr) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.dst_addr) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.type_of_service) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.time_to_live) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv4_mask->hdr.next_proto_id)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 header mask");
+ }
+
+ if (ipv4_last == NULL)
+ return 0;
+
+ /* Only fragment_offset supports range */
+ if (ipv4_last->hdr.version_ihl ||
+ ipv4_last->hdr.type_of_service ||
+ ipv4_last->hdr.total_length ||
+ ipv4_last->hdr.packet_id ||
+ ipv4_last->hdr.time_to_live ||
+ ipv4_last->hdr.next_proto_id ||
+ ipv4_last->hdr.hdr_checksum ||
+ ipv4_last->hdr.src_addr ||
+ ipv4_last->hdr.dst_addr) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "IPv4 range only supported for fragment_offset");
+ }
+
+ /* Validate fragment_offset range values */
+ uint16_t frag_mask = rte_be_to_cpu_16(ipv4_mask->hdr.fragment_offset);
+ uint16_t frag_spec = rte_be_to_cpu_16(ipv4_spec->hdr.fragment_offset);
+ uint16_t frag_last = rte_be_to_cpu_16(ipv4_last->hdr.fragment_offset);
+
+ /* Mask must be 0x3fff (fragment offset + MF flag) */
+ if (frag_mask != (RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 fragment_offset mask");
+ }
+
+ /* Only allow: frag rule (spec=0x8, last=0x2000) or non-frag (spec=0, last=0) */
+ if (frag_spec == (1 << RTE_IPV4_HDR_FO_SHIFT) &&
+ frag_last == RTE_IPV4_HDR_MF_FLAG)
+ return 0; /* Fragment rule */
+
+ if (frag_spec == 0 && frag_last == 0)
+ return 0; /* Non-fragment rule */
+
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv4 fragment_offset rule");
+}
+
+static int
+i40e_fdir_node_ipv4_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_ipv4 *ipv4_spec = item->spec;
+ const struct rte_flow_item_ipv4 *ipv4_mask = item->mask;
+ const struct rte_flow_item_ipv4 *ipv4_last = item->last;
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+
+ /* Set layer index for L3 flexible payload (after IPv4) */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L3_IDX;
+
+ /* set packet type */
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER;
+
+ /* set up flow type */
+ filter->input.flow_ext.inner_ip = false;
+ filter->input.flow_ext.oip_type = I40E_FDIR_IPTYPE_IPV4;
+
+ if (ipv4_mask == NULL && ipv4_spec == NULL)
+ return 0;
+
+ /* Mark that IPv4 fields are used */
+ if (!CI_FIELD_IS_ZERO(&ipv4_mask->hdr.next_proto_id)) {
+ filter->input.flow.ip4_flow.proto = ipv4_spec->hdr.next_proto_id;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV4_PROTO;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv4_mask->hdr.type_of_service)) {
+ filter->input.flow.ip4_flow.tos = ipv4_spec->hdr.type_of_service;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV4_TOS;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv4_mask->hdr.time_to_live)) {
+ filter->input.flow.ip4_flow.ttl = ipv4_spec->hdr.time_to_live;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV4_TTL;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv4_mask->hdr.src_addr)) {
+ filter->input.flow.ip4_flow.src_ip = ipv4_spec->hdr.src_addr;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV4_SRC;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv4_mask->hdr.dst_addr)) {
+ filter->input.flow.ip4_flow.dst_ip = ipv4_spec->hdr.dst_addr;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV4_DST;
+ }
+
+ /* a fragment_offset range marks this as a fragment rule */
+ if (ipv4_last != NULL && ipv4_spec->hdr.fragment_offset != 0) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_FRAG_IPV4;
+ filter->input.flow_ext.customized_pctype = true;
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_ipv6_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv6 *ipv6_spec = item->spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask = item->mask;
+ if (ipv6_mask == NULL && ipv6_spec == NULL)
+ return 0;
+
+ /* payload len isn't supported */
+ if (ipv6_mask->hdr.payload_len) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 header mask");
+ }
+ /* source and destination mask can either be all zeroes or all ones */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 source address mask");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 destination address mask");
+ }
+
+ /* check other supported fields */
+ if (!ci_is_zero_or_masked(ipv6_mask->hdr.vtc_flow, rte_cpu_to_be_32(I40E_IPV6_TC_MASK)) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.proto) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&ipv6_mask->hdr.hop_limits)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid IPv6 header mask");
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_ipv6_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_ipv6 *ipv6_spec = item->spec;
+ const struct rte_flow_item_ipv6 *ipv6_mask = item->mask;
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+
+ /* Set layer index for L3 flexible payload (after IPv6) */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L3_IDX;
+
+ /* set packet type */
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER;
+
+ /* set up flow type */
+ filter->input.flow_ext.inner_ip = false;
+ filter->input.flow_ext.oip_type = I40E_FDIR_IPTYPE_IPV6;
+
+ if (ipv6_mask == NULL && ipv6_spec == NULL)
+ return 0;
+ if (CI_FIELD_IS_MASKED(&ipv6_mask->hdr.src_addr)) {
+ memcpy(&filter->input.flow.ipv6_flow.src_ip, &ipv6_spec->hdr.src_addr, sizeof(ipv6_spec->hdr.src_addr));
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV6_SRC;
+ }
+ if (CI_FIELD_IS_MASKED(&ipv6_mask->hdr.dst_addr)) {
+ memcpy(&filter->input.flow.ipv6_flow.dst_ip, &ipv6_spec->hdr.dst_addr, sizeof(ipv6_spec->hdr.dst_addr));
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV6_DST;
+ }
+
+ if (!CI_FIELD_IS_ZERO(&ipv6_mask->hdr.vtc_flow)) {
+ uint32_t vtc_flow = rte_be_to_cpu_32(ipv6_spec->hdr.vtc_flow);
+ uint8_t tc = (uint8_t)((vtc_flow & I40E_IPV6_TC_MASK) >> I40E_FDIR_IPv6_TC_OFFSET);
+ filter->input.flow.ipv6_flow.tc = tc;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV6_TC;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv6_mask->hdr.proto)) {
+ filter->input.flow.ipv6_flow.proto = ipv6_spec->hdr.proto;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV6_NEXT_HDR;
+ }
+ if (!CI_FIELD_IS_ZERO(&ipv6_mask->hdr.hop_limits)) {
+ filter->input.flow.ipv6_flow.hop_limits = ipv6_spec->hdr.hop_limits;
+ filter->input.flow_ext.input_set |= I40E_INSET_IPV6_HOP_LIMIT;
+ }
+ /* mark as fragment traffic if necessary */
+ if (ipv6_spec->hdr.proto == I40E_IPV6_FRAG_HEADER) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_FRAG_IPV6;
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_tcp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_tcp *tcp_spec = item->spec;
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+
+ /* cannot match both fragmented and TCP */
+ if (filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV4 ||
+ filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Cannot combine fragmented IP and TCP match");
+ }
+
+ if (tcp_spec == NULL && tcp_mask == NULL)
+ return 0;
+
+ if (tcp_mask->hdr.sent_seq ||
+ tcp_mask->hdr.recv_ack ||
+ tcp_mask->hdr.data_off ||
+ tcp_mask->hdr.tcp_flags ||
+ tcp_mask->hdr.rx_win ||
+ tcp_mask->hdr.cksum ||
+ tcp_mask->hdr.tcp_urp) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid TCP header mask");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid TCP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_tcp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_tcp *tcp_spec = item->spec;
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+ rte_be16_t src_spec, dst_spec, src_mask, dst_mask;
+ bool is_ipv4;
+
+ /* Set layer index for L4 flexible payload */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L4_IDX;
+
+ /* set packet type depending on L3 type */
+ if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV4_TCP;
+ } else if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV6) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
+ }
+
+ if (tcp_spec == NULL && tcp_mask == NULL)
+ return 0;
+
+ src_spec = tcp_spec->hdr.src_port;
+ dst_spec = tcp_spec->hdr.dst_port;
+ src_mask = tcp_mask->hdr.src_port;
+ dst_mask = tcp_mask->hdr.dst_port;
+ is_ipv4 = filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4;
+
+ if (is_ipv4) {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.tcp4_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.tcp4_flow.dst_port = dst_spec;
+ }
+ } else {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.tcp6_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.tcp6_flow.dst_port = dst_spec;
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_udp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_udp *udp_spec = item->spec;
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+
+ /* cannot match both fragmented and UDP */
+ if (filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV4 ||
+ filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Cannot combine fragmented IP and UDP match");
+ }
+
+ if (udp_spec == NULL && udp_mask == NULL)
+ return 0;
+
+ if (udp_mask->hdr.dgram_len || udp_mask->hdr.dgram_cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid UDP header mask");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid UDP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_udp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_udp *udp_spec = item->spec;
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+ rte_be16_t src_spec, dst_spec, src_mask, dst_mask;
+ bool is_ipv4;
+
+ /* Set layer index for L4 flexible payload */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L4_IDX;
+
+ /* set packet type depending on L3 type */
+ if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
+ } else if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV6) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
+ }
+
+ /* mark flow as UDP for later nodes (e.g. ESP over UDP) */
+ filter->input.flow_ext.is_udp = true;
+
+ if (udp_spec == NULL && udp_mask == NULL)
+ return 0;
+
+ src_spec = udp_spec->hdr.src_port;
+ dst_spec = udp_spec->hdr.dst_port;
+ src_mask = udp_mask->hdr.src_port;
+ dst_mask = udp_mask->hdr.dst_port;
+ is_ipv4 = filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4;
+
+ if (is_ipv4) {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.udp4_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.udp4_flow.dst_port = dst_spec;
+ }
+ } else {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.udp6_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.udp6_flow.dst_port = dst_spec;
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_sctp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_sctp *sctp_spec = item->spec;
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+
+ /* cannot match both fragmented and SCTP */
+ if (filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV4 ||
+ filter->input.pctype == I40E_FILTER_PCTYPE_FRAG_IPV6) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Cannot combine fragmented IP and SCTP match");
+ }
+
+ if (sctp_spec == NULL && sctp_mask == NULL)
+ return 0;
+
+ if (sctp_mask->hdr.cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid SCTP header mask");
+ }
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.dst_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.tag)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid SCTP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_sctp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_sctp *sctp_spec = item->spec;
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+ rte_be16_t src_spec, dst_spec, src_mask, dst_mask;
+ rte_be32_t tag_spec, tag_mask;
+ bool is_ipv4;
+
+ /* Set layer index for L4 flexible payload */
+ filter->input.flow_ext.layer_idx = I40E_FLXPLD_L4_IDX;
+
+ /* set packet type depending on L3 type */
+ if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV4_SCTP;
+ } else if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV6) {
+ filter->input.pctype = I40E_FILTER_PCTYPE_NONF_IPV6_SCTP;
+ }
+
+ if (sctp_spec == NULL && sctp_mask == NULL)
+ return 0;
+
+ src_spec = sctp_spec->hdr.src_port;
+ dst_spec = sctp_spec->hdr.dst_port;
+ src_mask = sctp_mask->hdr.src_port;
+ dst_mask = sctp_mask->hdr.dst_port;
+ tag_spec = sctp_spec->hdr.tag;
+ tag_mask = sctp_mask->hdr.tag;
+ is_ipv4 = filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4;
+
+ if (is_ipv4) {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.sctp4_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.sctp4_flow.dst_port = dst_spec;
+ }
+ if (tag_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SCTP_VT;
+ filter->input.flow.sctp4_flow.verify_tag = tag_spec;
+ }
+ } else {
+ if (src_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SRC_PORT;
+ filter->input.flow.sctp6_flow.src_port = src_spec;
+ }
+ if (dst_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_DST_PORT;
+ filter->input.flow.sctp6_flow.dst_port = dst_spec;
+ }
+ if (tag_mask) {
+ filter->input.flow_ext.input_set |= I40E_INSET_SCTP_VT;
+ filter->input.flow.sctp6_flow.verify_tag = tag_spec;
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_raw_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_eth_dev *dev = fdir_ctx->base.dev;
+ const struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ const struct rte_flow_item_raw *raw_spec = item->spec;
+ const struct rte_flow_item_raw *raw_mask = item->mask;
+ enum i40e_flxpld_layer_idx raw_id = filter->input.flow_ext.raw_id;
+ size_t spec_size, spec_offset;
+ size_t total_size, i;
+ size_t new_src_offset;
+
+ /* flexible payload config writes global registers, which multi-driver mode forbids */
+ if (pf->support_multi_driver) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported flexible payload.");
+ }
+
+ /* Check max RAW items limit */
+ RTE_BUILD_BUG_ON(I40E_MAX_FLXPLD_LAYER != I40E_MAX_FLXPLD_FIED);
+ if (raw_id >= I40E_MAX_FLXPLD_LAYER) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Maximum 3 RAW items allowed per layer");
+ }
+
+ /* Check spec/mask lengths */
+ if (raw_spec->length != raw_mask->length) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid RAW length");
+ }
+
+ /* Relative offset is mandatory */
+ if (!raw_spec->relative) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW relative must be 1");
+ }
+
+ /* Offset must be 16-bit aligned */
+ if (raw_spec->offset % sizeof(uint16_t)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW offset must be even");
+ }
+
+ /* Search and limit not supported */
+ if (raw_spec->search || raw_spec->limit) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW search/limit not supported");
+ }
+
+ /* Offset must be non-negative */
+ if (raw_spec->offset < 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW offset must be non-negative");
+ }
+
+ /*
+ * The RAW node can be triggered multiple times; each invocation copies more data into the
+ * flexbyte buffer, so we need to validate total size/offset against the maximum allowed,
+ * because we cannot overflow that buffer.
+ *
+ * All data in the flex PIT is stored in units of 2 bytes (words), but all limits are in
+ * bytes, so sizes/offsets need to be converted accordingly.
+ */
+
+ /* flex size/offset for current item (in bytes) */
+ spec_size = raw_spec->length;
+ spec_offset = raw_spec->offset;
+
+ /* accumulate all raw items size/offset */
+ total_size = 0;
+ new_src_offset = 0;
+ for (i = 0; i < raw_id; i++) {
+ const struct flex_item *fi = &fdir_ctx->flex_data[i];
+ total_size += fi->size;
+ /* offset is relative to end of previous item */
+ new_src_offset += fi->offset + fi->size;
+ }
+ /* add current item to totals */
+ total_size += spec_size;
+ new_src_offset += spec_offset;
+
+ /* validate against max offset/size */
+ if (spec_size + new_src_offset > I40E_MAX_FLX_SOURCE_OFF) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW total offset exceeds maximum");
+ }
+ if (total_size > I40E_FDIR_MAX_FLEXLEN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "RAW total size exceeds maximum");
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_raw_process(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_raw *raw_spec = item->spec;
+ const struct rte_flow_item_raw *raw_mask = item->mask;
+ enum i40e_flxpld_layer_idx raw_id = filter->input.flow_ext.raw_id;
+ enum i40e_flxpld_layer_idx layer_idx = filter->input.flow_ext.layer_idx;
+ size_t flex_pit_field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + raw_id;
+ struct i40e_fdir_flex_pit *flex_pit;
+ size_t spec_size, spec_offset, i;
+ size_t total_size, new_src_offset;
+
+ /* flex size for current item */
+ spec_size = raw_spec->length;
+ spec_offset = raw_spec->offset;
+
+ /* store these for future reference */
+ fdir_ctx->flex_data[raw_id].size = spec_size;
+ fdir_ctx->flex_data[raw_id].offset = spec_offset;
+
+ /* accumulate all raw items size/offset */
+ total_size = 0;
+ new_src_offset = 0;
+ for (i = 0; i < raw_id; i++) {
+ const struct flex_item *fi = &fdir_ctx->flex_data[i];
+ total_size += fi->size;
+ /* offset is relative to end of previous item */
+ new_src_offset += fi->offset + fi->size;
+ }
+
+ /* copy bytes into the flex pit buffer */
+ for (i = 0; i < spec_size; i++) {
+ /* convert to byte offset */
+ const size_t start = total_size * sizeof(uint16_t);
+ size_t j = start + i;
+ filter->input.flow_ext.flexbytes[j] = raw_spec->pattern[i];
+ filter->input.flow_ext.flex_mask[j] = raw_mask->pattern[i];
+ }
+
+ /* populate flex pit */
+ flex_pit = &filter->input.flow_ext.flex_pit[flex_pit_field_idx];
+ /* convert to words (2-byte units) */
+ flex_pit->src_offset = (uint16_t)(new_src_offset / sizeof(uint16_t));
+ flex_pit->dst_offset = (uint16_t)(total_size / sizeof(uint16_t));
+ flex_pit->size = (uint16_t)(spec_size / sizeof(uint16_t));
+
+ /* increment raw item index */
+ filter->input.flow_ext.raw_id++;
+
+ /* mark as flex flow */
+ filter->input.flow_ext.is_flex_flow = true;
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_esp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_eth_dev *dev = fdir_ctx->base.dev;
+ const struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ const struct rte_flow_item_esp *esp_mask = item->mask;
+
+ if (!pf->esp_support) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Protocol not supported");
+ }
+
+ /* SPI must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&esp_mask->hdr.spi)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid ESP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_esp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_esp *esp_spec = item->spec;
+ bool is_ipv4 = filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4;
+ bool is_udp = filter->input.flow_ext.is_udp;
+
+ /* ESP uses customized pctype */
+ filter->input.flow_ext.customized_pctype = true;
+ fdir_ctx->custom_pctype = item->type;
+
+ if (is_ipv4) {
+ if (is_udp)
+ filter->input.flow.esp_ipv4_udp_flow.spi = esp_spec->hdr.spi;
+ else {
+ filter->input.flow.esp_ipv4_flow.spi = esp_spec->hdr.spi;
+ }
+ } else {
+ if (is_udp)
+ filter->input.flow.esp_ipv6_udp_flow.spi = esp_spec->hdr.spi;
+ else {
+ filter->input.flow.esp_ipv6_flow.spi = esp_spec->hdr.spi;
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_l2tpv3oip_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_l2tpv3oip *l2tp_mask = item->mask;
+
+ if (!CI_FIELD_IS_MASKED(&l2tp_mask->session_id)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid L2TPv3oIP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_l2tpv3oip_process(void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_l2tpv3oip *l2tp_spec = item->spec;
+
+ /* L2TPv3 uses customized pctype */
+ filter->input.flow_ext.customized_pctype = true;
+ fdir_ctx->custom_pctype = item->type;
+
+ /* Store session_id in appropriate flow union member based on IP version */
+ if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV4) {
+ filter->input.flow.ip4_l2tpv3oip_flow.session_id = l2tp_spec->session_id;
+ } else if (filter->input.flow_ext.oip_type == I40E_FDIR_IPTYPE_IPV6) {
+ filter->input.flow.ip6_l2tpv3oip_flow.session_id = l2tp_spec->session_id;
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_gtp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct rte_eth_dev *dev = fdir_ctx->base.dev;
+ const struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ const struct rte_flow_item_gtp *gtp_mask = item->mask;
+
+ /* DDP may not support this packet type */
+ if (!pf->gtp_support) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Protocol not supported");
+ }
+
+ if (gtp_mask->hdr.gtp_hdr_info ||
+ gtp_mask->hdr.msg_type ||
+ gtp_mask->hdr.plen) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid GTP header mask");
+ }
+ /* if GTP is specified, TEID must be masked */
+ if (!CI_FIELD_IS_MASKED(&gtp_mask->hdr.teid)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid GTP header mask");
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_node_gtp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ const struct rte_flow_item_gtp *gtp_spec = item->spec;
+
+ /* Mark as GTP tunnel with customized pctype */
+ filter->input.flow_ext.customized_pctype = true;
+ fdir_ctx->custom_pctype = item->type;
+
+ filter->input.flow.gtp_flow.teid = gtp_spec->teid;
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_inner_ipv4_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+
+ /* Mark as inner IP */
+ filter->input.flow_ext.inner_ip = true;
+ filter->input.flow_ext.iip_type = I40E_FDIR_IPTYPE_IPV4;
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_inner_ipv6_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+
+ /* Mark as inner IP */
+ filter->input.flow_ext.inner_ip = true;
+ filter->input.flow_ext.iip_type = I40E_FDIR_IPTYPE_IPV6;
+
+ return 0;
+}
+
+/* END node validation for FDIR - performs pctype determination and input_set validation */
+static int
+i40e_fdir_node_end_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_fdir_ctx *fdir_ctx = ctx;
+ const struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ uint64_t input_set = filter->input.flow_ext.input_set;
+ enum i40e_filter_pctype pctype = filter->input.pctype;
+
+ /*
+ * Before sending the configuration down to hardware, we need to make
+ * sure that the configuration makes sense - more specifically, that the
+ * input set is a valid one that is actually supported by the hardware.
+ * This is validated for built-in ptypes, however for customized ptypes,
+ * the validation is skipped, and we have no way of validating the input
+ * set because we do not have that information at our disposal - the
+ * input set for customized packet type is not available through DDP
+ * queries.
+ *
+ * However, we do know that some things are unsupported by the hardware no matter the
+ * configuration. We can check for them here.
+ */
+ const uint64_t i40e_l2_input_set = I40E_INSET_DMAC | I40E_INSET_SMAC;
+ const uint64_t i40e_l3_input_set = (I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ I40E_INSET_IPV4_TOS | I40E_INSET_IPV4_TTL |
+ I40E_INSET_IPV4_PROTO);
+ const uint64_t i40e_l4_input_set = (I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
+ const bool l2_in_set = (input_set & i40e_l2_input_set) != 0;
+ const bool l3_in_set = (input_set & i40e_l3_input_set) != 0;
+ const bool l4_in_set = (input_set & i40e_l4_input_set) != 0;
+
+ /* an ethertype match requires an L2-only pctype and excludes RAW patterns */
+ if ((input_set & I40E_INSET_LAST_ETHER_TYPE) != 0 &&
+ (pctype != I40E_FILTER_PCTYPE_L2_PAYLOAD ||
+ filter->input.flow_ext.is_flex_flow)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Cannot match ethertype with L3/L4 or RAW patterns");
+ }
+
+ /* L2 and L3 input sets are exclusive */
+ if (l2_in_set && l3_in_set) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Matching both L2 and L3 is not supported");
+ }
+ /* L2 and L4 input sets are exclusive */
+ if (l2_in_set && l4_in_set) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Matching both L2 and L4 is not supported");
+ }
+
+ /* if we are using one of the builtin packet types, validate it */
+ if (!filter->input.flow_ext.customized_pctype) {
+ /* validate the input set for the built-in pctype */
+ if (i40e_validate_input_set(pctype, RTE_ETH_FILTER_FDIR, input_set) != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid input set");
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_node_end_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ struct i40e_fdir_ctx *fdir_ctx = ctx;
+ struct i40e_fdir_filter_conf *filter = &fdir_ctx->fdir_filter;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(fdir_ctx->base.dev->data->dev_private);
+
+ /* Get customized pctype value */
+ if (filter->input.flow_ext.customized_pctype) {
+ enum i40e_filter_pctype pctype = i40e_flow_fdir_get_pctype_value(pf,
+ fdir_ctx->custom_pctype, filter);
+ if (pctype == I40E_FILTER_PCTYPE_INVALID) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Unsupported packet type");
+ return -rte_errno;
+ }
+ /* update FDIR packet type */
+ filter->input.pctype = pctype;
+ }
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_fdir_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_FDIR_NODE_START] = { .name = "START" },
+ [I40E_FDIR_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_eth_validate,
+ .process = i40e_fdir_node_eth_process,
+ },
+ [I40E_FDIR_NODE_VLAN] = {
+ .name = "VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_vlan_validate,
+ .process = i40e_fdir_node_vlan_process,
+ },
+ [I40E_FDIR_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK |
+ RTE_FLOW_NODE_EXPECT_RANGE,
+ .validate = i40e_fdir_node_ipv4_validate,
+ .process = i40e_fdir_node_ipv4_process,
+ },
+ [I40E_FDIR_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_ipv6_validate,
+ .process = i40e_fdir_node_ipv6_process,
+ },
+ [I40E_FDIR_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_tcp_validate,
+ .process = i40e_fdir_node_tcp_process,
+ },
+ [I40E_FDIR_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_udp_validate,
+ .process = i40e_fdir_node_udp_process,
+ },
+ [I40E_FDIR_NODE_SCTP] = {
+ .name = "SCTP",
+ .type = RTE_FLOW_ITEM_TYPE_SCTP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_sctp_validate,
+ .process = i40e_fdir_node_sctp_process,
+ },
+ [I40E_FDIR_NODE_ESP] = {
+ .name = "ESP",
+ .type = RTE_FLOW_ITEM_TYPE_ESP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_esp_validate,
+ .process = i40e_fdir_node_esp_process,
+ },
+ [I40E_FDIR_NODE_L2TPV3OIP] = {
+ .name = "L2TPV3OIP",
+ .type = RTE_FLOW_ITEM_TYPE_L2TPV3OIP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_l2tpv3oip_validate,
+ .process = i40e_fdir_node_l2tpv3oip_process,
+ },
+ [I40E_FDIR_NODE_GTPC] = {
+ .name = "GTPC",
+ .type = RTE_FLOW_ITEM_TYPE_GTPC,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_gtp_validate,
+ .process = i40e_fdir_node_gtp_process,
+ },
+ [I40E_FDIR_NODE_GTPU] = {
+ .name = "GTPU",
+ .type = RTE_FLOW_ITEM_TYPE_GTPU,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_gtp_validate,
+ .process = i40e_fdir_node_gtp_process,
+ },
+ [I40E_FDIR_NODE_INNER_IPV4] = {
+ .name = "INNER_IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .validate = i40e_fdir_node_ipv4_validate,
+ .process = i40e_fdir_node_inner_ipv4_process,
+ },
+ [I40E_FDIR_NODE_INNER_IPV6] = {
+ .name = "INNER_IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .validate = i40e_fdir_node_ipv6_validate,
+ .process = i40e_fdir_node_inner_ipv6_process,
+ },
+ [I40E_FDIR_NODE_RAW] = {
+ .name = "RAW",
+ .type = RTE_FLOW_ITEM_TYPE_RAW,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_fdir_node_raw_validate,
+ .process = i40e_fdir_node_raw_process,
+ },
+ [I40E_FDIR_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ .validate = i40e_fdir_node_end_validate,
+ .process = i40e_fdir_node_end_process
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_FDIR_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_VLAN,
+ I40E_FDIR_NODE_IPV4,
+ I40E_FDIR_NODE_IPV6,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_IPV4,
+ I40E_FDIR_NODE_IPV6,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_TCP,
+ I40E_FDIR_NODE_UDP,
+ I40E_FDIR_NODE_SCTP,
+ I40E_FDIR_NODE_ESP,
+ I40E_FDIR_NODE_L2TPV3OIP,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_TCP,
+ I40E_FDIR_NODE_UDP,
+ I40E_FDIR_NODE_SCTP,
+ I40E_FDIR_NODE_ESP,
+ I40E_FDIR_NODE_L2TPV3OIP,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_TCP] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_GTPC,
+ I40E_FDIR_NODE_GTPU,
+ I40E_FDIR_NODE_ESP,
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_SCTP] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_ESP] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_L2TPV3OIP] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_GTPC] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_GTPU] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_INNER_IPV4,
+ I40E_FDIR_NODE_INNER_IPV6,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_INNER_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_INNER_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_FDIR_NODE_RAW] = {
+ .next = (const size_t[]) {
+ I40E_FDIR_NODE_RAW,
+ I40E_FDIR_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+i40e_fdir_action_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(param->driver_ctx);
+ const struct rte_flow_action *first, *second;
+
+ first = actions->actions[0];
+ /* can be NULL */
+ second = actions->actions[1];
+
+ switch (first->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ {
+ const struct rte_flow_action_queue *act_q = first->conf;
+ /* check against PF constraints */
+ if (act_q->index >= pf->dev_data->nb_rx_queues) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, first,
+ "Invalid queue ID for FDIR");
+ }
+ break;
+ }
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ case RTE_FLOW_ACTION_TYPE_PASSTHRU:
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ break;
+ default:
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, first,
+ "Invalid first action for FDIR");
+ }
+
+ /* do we have another? */
+ if (second == NULL)
+ return 0;
+
+ switch (second->type) {
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ {
+ /* only one mark action can be specified */
+ if (first->type == RTE_FLOW_ACTION_TYPE_MARK) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, second,
+ "Invalid second action for FDIR");
+ }
+ break;
+ }
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ {
+ /* mark + flag is unsupported */
+ if (first->type == RTE_FLOW_ACTION_TYPE_MARK) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, second,
+ "Invalid second action for FDIR");
+ }
+ break;
+ }
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ /* RSS action is only allowed after PASSTHRU or MARK */
+ if (first->type != RTE_FLOW_ACTION_TYPE_PASSTHRU &&
+ first->type != RTE_FLOW_ACTION_TYPE_MARK) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, second,
+ "Invalid second action for FDIR");
+ }
+ break;
+ default:
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, second,
+ "Invalid second action for FDIR");
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_ctx_parse(const struct rte_flow_action *actions,
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_adapter *adapter = ctx->dev->data->dev_private;
+ struct i40e_fdir_ctx *fdir_ctx = (struct i40e_fdir_ctx *)ctx;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param ac_param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_DROP,
+ RTE_FLOW_ACTION_TYPE_PASSTHRU,
+ RTE_FLOW_ACTION_TYPE_MARK,
+ RTE_FLOW_ACTION_TYPE_FLAG,
+ RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 2,
+ .driver_ctx = adapter,
+ .check = i40e_fdir_action_check,
+ };
+ int ret;
+ const struct rte_flow_action *first, *second;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret) {
+ return ret;
+ }
+
+ ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
+ if (ret) {
+ return ret;
+ }
+
+ first = parsed_actions.actions[0];
+ /* can be NULL */
+ second = parsed_actions.actions[1];
+
+ if (first->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *act_q = first->conf;
+ fdir_ctx->fdir_filter.action.rx_queue = act_q->index;
+ fdir_ctx->fdir_filter.action.behavior = I40E_FDIR_ACCEPT;
+ } else if (first->type == RTE_FLOW_ACTION_TYPE_DROP) {
+ fdir_ctx->fdir_filter.action.behavior = I40E_FDIR_REJECT;
+ } else if (first->type == RTE_FLOW_ACTION_TYPE_PASSTHRU) {
+ fdir_ctx->fdir_filter.action.behavior = I40E_FDIR_PASSTHRU;
+ } else if (first->type == RTE_FLOW_ACTION_TYPE_MARK) {
+ const struct rte_flow_action_mark *act_m = first->conf;
+ fdir_ctx->fdir_filter.action.behavior = I40E_FDIR_PASSTHRU;
+ fdir_ctx->fdir_filter.action.report_status = I40E_FDIR_REPORT_ID;
+ fdir_ctx->fdir_filter.soft_id = act_m->id;
+ }
+
+ if (second != NULL) {
+ if (second->type == RTE_FLOW_ACTION_TYPE_MARK) {
+ const struct rte_flow_action_mark *act_m = second->conf;
+ fdir_ctx->fdir_filter.action.report_status = I40E_FDIR_REPORT_ID;
+ fdir_ctx->fdir_filter.soft_id = act_m->id;
+ } else if (second->type == RTE_FLOW_ACTION_TYPE_FLAG) {
+ fdir_ctx->fdir_filter.action.report_status = I40E_FDIR_NO_REPORT_STATUS;
+ }
+ /* RSS action does nothing */
+ }
+ return 0;
+}
+
+static int
+i40e_fdir_flow_install(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct i40e_flow_engine_fdir_flow *fdir_flow = (struct i40e_flow_engine_fdir_flow *)flow;
+ struct rte_eth_dev *dev = flow->dev;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ bool need_teardown = false;
+ bool need_rx_proc_disable = false;
+ int ret;
+
+ /* if fdir is not configured, configure it */
+ if (pf->fdir.fdir_vsi == NULL) {
+ ret = i40e_fdir_setup(pf);
+ if (ret != I40E_SUCCESS) {
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to setup fdir.");
+ goto err;
+ }
+ /* if something failed down the line, teardown is needed */
+ need_teardown = true;
+ ret = i40e_fdir_configure(dev);
+ if (ret < 0) {
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to configure fdir.");
+ goto err;
+ }
+ }
+
+ /* if this is first flow, enable fdir check for rx queues */
+ if (pf->fdir.num_fdir_flows == 0) {
+ i40e_fdir_rx_proc_enable(dev, 1);
+ /* if something failed down the line, we need to disable fdir check for rx queues */
+ need_rx_proc_disable = true;
+ }
+
+ ret = i40e_flow_add_del_fdir_filter(dev, &fdir_flow->fdir_filter, 1);
+ if (ret != 0) {
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to add fdir filter.");
+ goto err;
+ }
+
+ /* account for the newly installed flow */
+ pf->fdir.num_fdir_flows++;
+
+ return 0;
+err:
+ if (need_rx_proc_disable)
+ i40e_fdir_rx_proc_enable(dev, 0);
+ if (need_teardown)
+ i40e_fdir_teardown(pf);
+ return ret;
+}
+
+static int
+i40e_fdir_flow_uninstall(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct rte_eth_dev *dev = flow->dev;
+ struct i40e_flow_engine_fdir_flow *fdir_flow = (struct i40e_flow_engine_fdir_flow *)flow;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int ret;
+
+ ret = i40e_flow_add_del_fdir_filter(dev, &fdir_flow->fdir_filter, 0);
+ if (ret != 0) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to delete fdir filter.");
+ }
+
+ /* we are removing a flow */
+ if (pf->fdir.num_fdir_flows > 0)
+ pf->fdir.num_fdir_flows--;
+
+ /* if there are no more flows, disable fdir check for rx queues and teardown fdir */
+ if (pf->fdir.num_fdir_flows == 0) {
+ i40e_fdir_rx_proc_enable(dev, 0);
+ i40e_fdir_teardown(pf);
+ }
+
+ return 0;
+}
+
+static int
+i40e_fdir_flow_engine_init(const struct ci_flow_engine *engine,
+ struct rte_eth_dev *dev,
+ void *priv_data)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_fdir_info *fdir_info = &pf->fdir;
+ struct i40e_fdir_engine_priv *priv = priv_data;
+ struct i40e_fdir_flow_pool_entry *pool;
+ struct rte_bitmap *bmp;
+ uint32_t bmp_size;
+ void *bmp_mem;
+ uint32_t i;
+
+ pool = rte_zmalloc(engine->name,
+ fdir_info->fdir_space_size * sizeof(*pool), 0);
+ if (pool == NULL)
+ return -ENOMEM;
+
+ bmp_size = rte_bitmap_get_memory_footprint(fdir_info->fdir_space_size);
+ bmp_mem = rte_zmalloc("fdir_bmap", bmp_size, RTE_CACHE_LINE_SIZE);
+ if (bmp_mem == NULL) {
+ rte_free(pool);
+ return -ENOMEM;
+ }
+
+ bmp = rte_bitmap_init(fdir_info->fdir_space_size, bmp_mem, bmp_size);
+ if (bmp == NULL) {
+ rte_free(bmp_mem);
+ rte_free(pool);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < fdir_info->fdir_space_size; i++) {
+ pool[i].idx = i;
+ rte_bitmap_set(bmp, i);
+ }
+
+ priv->pool = pool;
+ priv->bmp = bmp;
+
+ return 0;
+}
+
+static void
+i40e_fdir_flow_engine_uninit(const struct ci_flow_engine *engine __rte_unused,
+ void *priv_data)
+{
+ struct i40e_fdir_engine_priv *priv = priv_data;
+
+ rte_free(priv->bmp);
+ rte_free(priv->pool);
+}
+
+static struct ci_flow *
+i40e_fdir_flow_alloc(const struct ci_flow_engine *engine,
+ struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_fdir_info *fdir_info = &pf->fdir;
+ struct i40e_fdir_engine_priv *priv = ci_flow_engine_priv(&pf->flow_engine_conf, engine->type);
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint32_t bit;
+ int ret;
+
+ if (priv == NULL || priv->pool == NULL || priv->bmp == NULL)
+ return NULL;
+
+ if (fdir_info->fdir_actual_cnt >= fdir_info->fdir_space_size)
+ return NULL;
+
+ ret = rte_bitmap_scan(priv->bmp, &pos, &slab);
+ if (ret == 0)
+ return NULL;
+
+ bit = rte_bsf64(slab);
+ pos += bit;
+ rte_bitmap_clear(priv->bmp, pos);
+
+ memset(&priv->pool[pos].flow, 0, sizeof(priv->pool[pos].flow));
+ return (struct ci_flow *)&priv->pool[pos].flow;
+}
+
+static void
+i40e_fdir_flow_free(struct ci_flow *flow)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(flow->dev->data->dev_private);
+ struct i40e_fdir_engine_priv *priv = ci_flow_engine_priv(&pf->flow_engine_conf, flow->engine_type);
+ struct i40e_fdir_flow_pool_entry *entry;
+
+ entry = I40E_FDIR_FLOW_ENTRY((struct i40e_flow_engine_fdir_flow *)flow);
+ rte_bitmap_set(priv->bmp, entry->idx);
+}
+
+const struct ci_flow_engine_ops i40e_flow_engine_fdir_ops = {
+ .init = i40e_fdir_flow_engine_init,
+ .uninit = i40e_fdir_flow_engine_uninit,
+ .flow_alloc = i40e_fdir_flow_alloc,
+ .flow_free = i40e_fdir_flow_free,
+ .ctx_parse = i40e_fdir_ctx_parse,
+ .flow_install = i40e_fdir_flow_install,
+ .flow_uninstall = i40e_fdir_flow_uninstall,
+};
+
+const struct ci_flow_engine i40e_flow_engine_fdir = {
+ .name = "i40e_fdir",
+ .type = I40E_FLOW_ENGINE_TYPE_FDIR,
+ .ops = &i40e_flow_engine_fdir_ops,
+ .ctx_size = sizeof(struct i40e_fdir_ctx),
+ .flow_size = sizeof(struct i40e_flow_engine_fdir_flow),
+ .priv_size = sizeof(struct i40e_fdir_engine_priv),
+ .graph = &i40e_fdir_graph,
+};
diff --git a/drivers/net/intel/i40e/meson.build b/drivers/net/intel/i40e/meson.build
index bff0518fc9..0638f873dd 100644
--- a/drivers/net/intel/i40e/meson.build
+++ b/drivers/net/intel/i40e/meson.build
@@ -26,6 +26,7 @@ sources += files(
'i40e_fdir.c',
'i40e_flow.c',
'i40e_flow_ethertype.c',
+ 'i40e_flow_fdir.c',
'i40e_tm.c',
'i40e_hash.c',
'i40e_vf_representor.c',
--
2.47.3
* [RFC PATCH v1 15/21] net/i40e: reimplement tunnel QinQ parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (13 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 14/21] net/i40e: reimplement FDIR parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 16/21] net/i40e: reimplement VXLAN parser Anatoly Burakov
` (6 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for tunnel QinQ filters.
As a result of transitioning to more formalized VLAN validation, some
checks have become more stringent:
- the VLAN TCI mask is now required to be fully masked (all-ones);
previously only the eth_proto mask was checked, and any non-zero
vlan_tci mask value was silently accepted
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 132 +--------
drivers/net/intel/i40e/i40e_flow.h | 2 +
drivers/net/intel/i40e/i40e_flow_tunnel.c | 338 ++++++++++++++++++++++
drivers/net/intel/i40e/meson.build | 1 +
4 files changed, 342 insertions(+), 131 deletions(-)
create mode 100644 drivers/net/intel/i40e/i40e_flow_tunnel.c
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 44dcb4f5b2..3ca528a1f3 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -33,6 +33,7 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
{
&i40e_flow_engine_ethertype,
&i40e_flow_engine_fdir,
+ &i40e_flow_engine_tunnel_qinq,
}
};
@@ -87,17 +88,6 @@ static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
struct i40e_tunnel_filter *filter);
static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf);
-static int
-i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
-static int
-i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter);
static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
@@ -291,13 +281,6 @@ static enum rte_flow_item_type pattern_mpls_4[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_qinq_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static struct i40e_valid_pattern i40e_supported_patterns[] = {
/* VXLAN */
{ pattern_vxlan_1, i40e_flow_parse_vxlan_filter },
@@ -319,8 +302,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = {
{ pattern_fdir_ipv4_gtpu, i40e_flow_parse_gtp_filter },
{ pattern_fdir_ipv6_gtpc, i40e_flow_parse_gtp_filter },
{ pattern_fdir_ipv6_gtpu, i40e_flow_parse_gtp_filter },
- /* QINQ */
- { pattern_qinq_1, i40e_flow_parse_qinq_filter },
/* L4 over port */
{ pattern_fdir_ipv4_udp, i40e_flow_parse_l4_cloud_filter },
{ pattern_fdir_ipv4_tcp, i40e_flow_parse_l4_cloud_filter },
@@ -1586,117 +1567,6 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
return ret;
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: QINQ.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- */
-static int
-i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_vlan *vlan_spec = NULL;
- const struct rte_flow_item_vlan *vlan_mask = NULL;
- const struct rte_flow_item_vlan *i_vlan_spec = NULL;
- const struct rte_flow_item_vlan *i_vlan_mask = NULL;
- const struct rte_flow_item_vlan *o_vlan_spec = NULL;
- const struct rte_flow_item_vlan *o_vlan_mask = NULL;
-
- enum rte_flow_item_type item_type;
- bool vlan_flag = 0;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ETH item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- vlan_spec = item->spec;
- vlan_mask = item->mask;
-
- if (!(vlan_spec && vlan_mask) ||
- vlan_mask->hdr.eth_proto) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid vlan item");
- return -rte_errno;
- }
-
- if (!vlan_flag) {
- o_vlan_spec = vlan_spec;
- o_vlan_mask = vlan_mask;
- vlan_flag = 1;
- } else {
- i_vlan_spec = vlan_spec;
- i_vlan_mask = vlan_mask;
- vlan_flag = 0;
- }
- break;
-
- default:
- break;
- }
- }
-
- /* Get filter specification */
- if (o_vlan_mask != NULL && i_vlan_mask != NULL) {
- filter->outer_vlan = rte_be_to_cpu_16(o_vlan_spec->hdr.vlan_tci);
- filter->inner_vlan = rte_be_to_cpu_16(i_vlan_spec->hdr.vlan_tci);
- } else {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL,
- "Invalid filter type");
- return -rte_errno;
- }
-
- filter->tunnel_type = I40E_TUNNEL_TYPE_QINQ;
- return 0;
-}
-
-static int
-i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_qinq_pattern(dev, pattern,
- error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
static int
i40e_flow_check(struct rte_eth_dev *dev,
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index e6ad1afdba..c578351eb4 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -16,11 +16,13 @@ i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
I40E_FLOW_ENGINE_TYPE_FDIR,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
extern const struct ci_flow_engine i40e_flow_engine_ethertype;
extern const struct ci_flow_engine i40e_flow_engine_fdir;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_qinq;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
new file mode 100644
index 0000000000..621354d6ea
--- /dev/null
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -0,0 +1,338 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include "i40e_ethdev.h"
+#include "i40e_flow.h"
+
+#include "../common/flow_engine.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+
+struct i40e_tunnel_ctx {
+ struct ci_flow_engine_ctx base;
+ struct i40e_tunnel_filter_conf filter;
+};
+
+struct i40e_tunnel_flow {
+ struct rte_flow base;
+ struct i40e_tunnel_filter_conf filter;
+};
+
+/**
+ * QinQ tunnel filter graph implementation
+ * Pattern: START -> ETH -> OUTER_VLAN -> INNER_VLAN -> END
+ */
+enum i40e_tunnel_qinq_node_id {
+ I40E_TUNNEL_QINQ_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_QINQ_NODE_ETH,
+ I40E_TUNNEL_QINQ_NODE_OUTER_VLAN,
+ I40E_TUNNEL_QINQ_NODE_INNER_VLAN,
+ I40E_TUNNEL_QINQ_NODE_END,
+ I40E_TUNNEL_QINQ_NODE_MAX,
+};
+
+static int
+i40e_tunnel_node_vlan_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+
+ /* matching eth proto not supported */
+ if (vlan_mask->hdr.eth_proto) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid VLAN mask");
+ }
+
+ /* VLAN TCI must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&vlan_mask->hdr.vlan_tci)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid VLAN mask");
+ }
+
+ return 0;
+}
+
+/* common VLAN processing for both outer and inner VLAN nodes */
+static int
+i40e_tunnel_node_vlan_process(struct i40e_tunnel_ctx *tunnel_ctx,
+ const struct rte_flow_item *item, bool is_inner)
+{
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ /* Store the VLAN ID and set filter flag */
+ if (is_inner) {
+ tunnel_filter->inner_vlan = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
+ tunnel_filter->filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
+ } else {
+ tunnel_filter->outer_vlan = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci);
+ /* no special flag for outer VLAN matching */
+ }
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_outer_vlan_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+
+ return i40e_tunnel_node_vlan_process(tunnel_ctx, item, false);
+}
+
+static int
+i40e_tunnel_node_inner_vlan_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+
+ return i40e_tunnel_node_vlan_process(tunnel_ctx, item, true);
+}
+
+static int
+i40e_tunnel_qinq_node_end_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_QINQ;
+
+ /* the QinQ filter type does not use the IVLAN flag, so clear it */
+ tunnel_filter->filter_type &= ~RTE_ETH_TUNNEL_FILTER_IVLAN;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_qinq_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_QINQ_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_QINQ_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_QINQ_NODE_OUTER_VLAN] = {
+ .name = "OUTER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_vlan_validate,
+ .process = i40e_tunnel_node_outer_vlan_process,
+ },
+ [I40E_TUNNEL_QINQ_NODE_INNER_VLAN] = {
+ .name = "INNER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_vlan_validate,
+ .process = i40e_tunnel_node_inner_vlan_process,
+ },
+ [I40E_TUNNEL_QINQ_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ .process = i40e_tunnel_qinq_node_end_process,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_QINQ_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_QINQ_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_QINQ_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_QINQ_NODE_OUTER_VLAN,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_QINQ_NODE_OUTER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_QINQ_NODE_INNER_VLAN,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_QINQ_NODE_INNER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_QINQ_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static int
+i40e_tunnel_action_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(param->driver_ctx);
+ const struct rte_flow_action *first, *second;
+ const struct rte_flow_action_queue *act_q;
+ bool is_to_vf = false;
+
+ first = actions->actions[0];
+ /* can be NULL */
+ second = actions->actions[1];
+
+ /* first action must be PF or VF */
+ if (first->type == RTE_FLOW_ACTION_TYPE_VF) {
+ const struct rte_flow_action_vf *vf = first->conf;
+ if (vf->id >= pf->vf_num) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, first,
+ "Invalid VF ID for tunnel filter");
+ }
+ is_to_vf = true;
+ } else if (first->type != RTE_FLOW_ACTION_TYPE_PF) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, first,
+ "Unsupported action");
+ }
+
+ /* check if second action is QUEUE */
+ if (second == NULL)
+ return 0;
+
+ if (second->type != RTE_FLOW_ACTION_TYPE_QUEUE) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, second,
+ "Unsupported action");
+ }
+
+ act_q = second->conf;
+ /* check queue ID for PF flow */
+ if (!is_to_vf && act_q->index >= pf->dev_data->nb_rx_queues) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q,
+ "Invalid queue ID for tunnel filter");
+ }
+ /* check queue ID for VF flow */
+ if (is_to_vf && act_q->index >= pf->vf_nb_qps) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q,
+ "Invalid queue ID for tunnel filter");
+ }
+
+ return 0;
+}
+
+static int
+i40e_tunnel_ctx_parse(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = (struct i40e_tunnel_ctx *)ctx;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param ac_param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_QUEUE,
+ RTE_FLOW_ACTION_TYPE_PF,
+ RTE_FLOW_ACTION_TYPE_VF,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 2,
+ .check = i40e_tunnel_action_check,
+ .driver_ctx = ctx->dev->data->dev_private,
+ };
+ const struct rte_flow_action *first, *second;
+ const struct rte_flow_action_queue *act_q;
+ int ret;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret)
+ return ret;
+
+ ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
+ if (ret)
+ return ret;
+
+ first = parsed_actions.actions[0];
+ /* can be NULL */
+ second = parsed_actions.actions[1];
+
+ if (first->type == RTE_FLOW_ACTION_TYPE_VF) {
+ const struct rte_flow_action_vf *vf = first->conf;
+ tunnel_ctx->filter.vf_id = vf->id;
+ tunnel_ctx->filter.is_to_vf = 1;
+ } else if (first->type == RTE_FLOW_ACTION_TYPE_PF) {
+ tunnel_ctx->filter.is_to_vf = 0;
+ }
+
+ /* check if second action is QUEUE */
+ if (second == NULL)
+ return 0;
+
+ act_q = second->conf;
+ tunnel_ctx->filter.queue_id = act_q->index;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct i40e_tunnel_ctx *tunnel_ctx = (const struct i40e_tunnel_ctx *)ctx;
+ struct i40e_tunnel_flow *tunnel_flow = (struct i40e_tunnel_flow *)flow;
+
+ /* copy filter configuration from context to flow */
+ tunnel_flow->filter = tunnel_ctx->filter;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_flow_install(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(flow->dev->data->dev_private);
+ struct i40e_tunnel_flow *tunnel_flow = (struct i40e_tunnel_flow *)flow;
+ int ret;
+
+ ret = i40e_dev_consistent_tunnel_filter_set(pf, &tunnel_flow->filter, 1);
+ if (ret) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to install tunnel filter");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_flow_uninstall(struct ci_flow *flow, struct rte_flow_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(flow->dev->data->dev_private);
+ struct i40e_tunnel_flow *tunnel_flow = (struct i40e_tunnel_flow *)flow;
+ int ret;
+
+ ret = i40e_dev_consistent_tunnel_filter_set(pf, &tunnel_flow->filter, 0);
+ if (ret) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to uninstall tunnel filter");
+ }
+ return 0;
+}
+
+const struct ci_flow_engine_ops i40e_flow_engine_tunnel_ops = {
+ .ctx_parse = i40e_tunnel_ctx_parse,
+ .ctx_to_flow = i40e_tunnel_ctx_to_flow,
+ .flow_install = i40e_tunnel_flow_install,
+ .flow_uninstall = i40e_tunnel_flow_uninstall,
+};
+
+const struct ci_flow_engine i40e_flow_engine_tunnel_qinq = {
+ .name = "i40e_tunnel_qinq",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_qinq_graph,
+};
diff --git a/drivers/net/intel/i40e/meson.build b/drivers/net/intel/i40e/meson.build
index 0638f873dd..9cff46b5e6 100644
--- a/drivers/net/intel/i40e/meson.build
+++ b/drivers/net/intel/i40e/meson.build
@@ -27,6 +27,7 @@ sources += files(
'i40e_flow.c',
'i40e_flow_ethertype.c',
'i40e_flow_fdir.c',
+ 'i40e_flow_tunnel.c',
'i40e_tm.c',
'i40e_hash.c',
'i40e_vf_representor.c',
--
2.47.3
* [RFC PATCH v1 16/21] net/i40e: reimplement VXLAN parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (14 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 15/21] net/i40e: reimplement tunnel QinQ parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 17/21] net/i40e: reimplement NVGRE parser Anatoly Burakov
` (5 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for VXLAN tunnels.
VLAN nodes reuse the existing, more stringent VLAN node validation, so an
arbitrary non-zero `vlan_tci` mask is no longer allowed; only a full mask
is accepted.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 295 +-------------------
drivers/net/intel/i40e/i40e_flow.h | 3 +
drivers/net/intel/i40e/i40e_flow_tunnel.c | 311 ++++++++++++++++++++++
3 files changed, 325 insertions(+), 284 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 3ca528a1f3..1b1547a8ac 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -34,6 +34,7 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_ethertype,
&i40e_flow_engine_fdir,
&i40e_flow_engine_tunnel_qinq,
+ &i40e_flow_engine_tunnel_vxlan,
}
};
@@ -65,11 +66,6 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
@@ -177,44 +173,6 @@ static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
};
/* Pattern matched tunnel filter */
-static enum rte_flow_item_type pattern_vxlan_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_VXLAN,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_vxlan_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_VXLAN,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_vxlan_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_VXLAN,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_vxlan_4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_VXLAN,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static enum rte_flow_item_type pattern_nvgre_1[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV4,
@@ -282,11 +240,6 @@ static enum rte_flow_item_type pattern_mpls_4[] = {
};
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* VXLAN */
- { pattern_vxlan_1, i40e_flow_parse_vxlan_filter },
- { pattern_vxlan_2, i40e_flow_parse_vxlan_filter },
- { pattern_vxlan_3, i40e_flow_parse_vxlan_filter },
- { pattern_vxlan_4, i40e_flow_parse_vxlan_filter },
/* NVGRE */
{ pattern_nvgre_1, i40e_flow_parse_nvgre_filter },
{ pattern_nvgre_2, i40e_flow_parse_nvgre_filter },
@@ -776,253 +729,27 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
return ret;
}
-static uint16_t i40e_supported_tunnel_filter_types[] = {
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
- RTE_ETH_TUNNEL_FILTER_IVLAN,
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
- RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
- RTE_ETH_TUNNEL_FILTER_IMAC,
- RTE_ETH_TUNNEL_FILTER_IMAC,
-};
-
-static int
+int
i40e_check_tunnel_filter_type(uint8_t filter_type)
{
+ static const uint16_t i40e_supported_tunnel_filter_types[] = {
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ };
uint8_t i;
for (i = 0; i < RTE_DIM(i40e_supported_tunnel_filter_types); i++) {
if (filter_type == i40e_supported_tunnel_filter_types[i])
return 0;
}
-
return -1;
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: IMAC_IVLAN_TENID, IMAC_IVLAN,
- * IMAC_TENID, OMAC_TENID_IMAC and IMAC.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- */
-static int
-i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- const struct rte_flow_item_vxlan *vxlan_spec;
- const struct rte_flow_item_vxlan *vxlan_mask;
- const struct rte_flow_item_vlan *vlan_spec;
- const struct rte_flow_item_vlan *vlan_mask;
- uint8_t filter_type = 0;
- bool is_vni_masked = 0;
- uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
- enum rte_flow_item_type item_type;
- bool vxlan_flag = 0;
- uint32_t tenant_id_be = 0;
- int ret;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- eth_spec = item->spec;
- eth_mask = item->mask;
-
- /* Check if ETH item is used for place holder.
- * If yes, both spec and mask should be NULL.
- * If no, both spec and mask shouldn't be NULL.
- */
- if ((!eth_spec && eth_mask) ||
- (eth_spec && !eth_mask)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ether spec/mask");
- return -rte_errno;
- }
-
- if (eth_spec && eth_mask) {
- /* DST address of inner MAC shouldn't be masked.
- * SRC address of Inner MAC should be masked.
- */
- if (!rte_is_broadcast_ether_addr(&eth_mask->hdr.dst_addr) ||
- !rte_is_zero_ether_addr(&eth_mask->hdr.src_addr) ||
- eth_mask->hdr.ether_type) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ether spec/mask");
- return -rte_errno;
- }
-
- if (!vxlan_flag) {
- rte_memcpy(&filter->outer_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
- filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
- } else {
- rte_memcpy(&filter->inner_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
- filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
- }
- }
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- vlan_spec = item->spec;
- vlan_mask = item->mask;
- if (!(vlan_spec && vlan_mask) ||
- vlan_mask->hdr.eth_proto) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid vlan item");
- return -rte_errno;
- }
-
- if (vlan_spec && vlan_mask) {
- if (vlan_mask->hdr.vlan_tci ==
- rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
- filter->inner_vlan =
- rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
- I40E_VLAN_TCI_MASK;
- filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
- /* IPv4 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
- /* IPv6 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- /* UDP is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid UDP item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- vxlan_spec = item->spec;
- vxlan_mask = item->mask;
- /* Check if VXLAN item is used to describe protocol.
- * If yes, both spec and mask should be NULL.
- * If no, both spec and mask shouldn't be NULL.
- */
- if ((!vxlan_spec && vxlan_mask) ||
- (vxlan_spec && !vxlan_mask)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid VXLAN item");
- return -rte_errno;
- }
-
- /* Check if VNI is masked. */
- if (vxlan_spec && vxlan_mask) {
- is_vni_masked =
- !!memcmp(vxlan_mask->hdr.vni, vni_mask,
- RTE_DIM(vni_mask));
- if (is_vni_masked) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid VNI mask");
- return -rte_errno;
- }
-
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- vxlan_spec->hdr.vni, 3);
- filter->tenant_id =
- rte_be_to_cpu_32(tenant_id_be);
- filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
- }
-
- vxlan_flag = 1;
- break;
- default:
- break;
- }
- }
-
- ret = i40e_check_tunnel_filter_type(filter_type);
- if (ret < 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL,
- "Invalid filter type");
- return -rte_errno;
- }
- filter->filter_type = filter_type;
-
- filter->tunnel_type = I40E_TUNNEL_TYPE_VXLAN;
-
- return 0;
-}
-
-static int
-i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_vxlan_pattern(dev, pattern,
- error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
-
/* 1. Last in item should be NULL as range is not supported.
* 2. Supported filter types: IMAC_IVLAN_TENID, IMAC_IVLAN,
* IMAC_TENID, OMAC_TENID_IMAC and IMAC.
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index c578351eb4..0981b4569a 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -12,11 +12,13 @@ uint8_t
i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
enum rte_flow_item_type item_type,
struct i40e_fdir_filter_conf *filter);
+int i40e_check_tunnel_filter_type(uint8_t filter_type);
enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
I40E_FLOW_ENGINE_TYPE_FDIR,
I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -24,5 +26,6 @@ extern const struct ci_flow_engine_list i40e_flow_engine_list;
extern const struct ci_flow_engine i40e_flow_engine_ethertype;
extern const struct ci_flow_engine i40e_flow_engine_fdir;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_qinq;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
index 621354d6ea..ec6107dde0 100644
--- a/drivers/net/intel/i40e/i40e_flow_tunnel.c
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -166,6 +166,308 @@ const struct rte_flow_graph i40e_tunnel_qinq_graph = {
},
};
+/**
+ * VXLAN tunnel filter graph implementation
+ * Pattern: START -> ETH -> (IPv4 | IPv6) -> UDP -> VXLAN -> ETH -> [VLAN] -> END
+ */
+enum i40e_tunnel_vxlan_node_id {
+ I40E_TUNNEL_VXLAN_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_VXLAN_NODE_OUTER_ETH,
+ I40E_TUNNEL_VXLAN_NODE_IPV4,
+ I40E_TUNNEL_VXLAN_NODE_IPV6,
+ I40E_TUNNEL_VXLAN_NODE_UDP,
+ I40E_TUNNEL_VXLAN_NODE_VXLAN,
+ I40E_TUNNEL_VXLAN_NODE_INNER_ETH,
+ I40E_TUNNEL_VXLAN_NODE_INNER_VLAN,
+ I40E_TUNNEL_VXLAN_NODE_END,
+ I40E_TUNNEL_VXLAN_NODE_MAX,
+};
+
+static int
+i40e_tunnel_node_eth_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+
+ /* spec/mask are optional */
+ if (eth_spec == NULL && eth_mask == NULL)
+ return 0;
+
+ /* matching eth type not supported */
+ if (eth_mask->hdr.ether_type) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid ETH mask");
+ }
+
+ /* source MAC must be fully unmasked */
+ if (!CI_FIELD_IS_ZERO(ð_mask->hdr.src_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid ETH mask");
+ }
+ /* destination MAC must be fully masked */
+ if (!CI_FIELD_IS_MASKED(ð_mask->hdr.dst_addr)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid ETH mask");
+ }
+
+ return 0;
+}
+
+static int
+i40e_tunnel_eth_process(struct i40e_tunnel_ctx *tunnel_ctx,
+ const struct rte_flow_item *item, bool is_inner)
+{
+ const struct rte_flow_item_eth *eth_spec = item->spec;
+ const struct rte_flow_item_eth *eth_mask = item->mask;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ /* eth spec/mask are optional */
+ if (eth_spec == NULL && eth_mask == NULL)
+ return 0;
+
+ /* Store the MAC addresses and set filter flags */
+ if (is_inner) {
+ memcpy(&tunnel_filter->inner_mac, ð_spec->hdr.dst_addr,
+ sizeof(tunnel_filter->inner_mac));
+ tunnel_filter->filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
+ } else {
+ memcpy(&tunnel_filter->outer_mac, ð_spec->hdr.dst_addr,
+ sizeof(tunnel_filter->outer_mac));
+ tunnel_filter->filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_outer_eth_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+
+ return i40e_tunnel_eth_process(tunnel_ctx, item, false);
+}
+
+static int
+i40e_tunnel_inner_eth_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+
+ return i40e_tunnel_eth_process(tunnel_ctx, item, true);
+}
+
+static int
+i40e_tunnel_node_ipv4_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_ipv6_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_vxlan_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vxlan *vxlan_spec = item->spec;
+ const struct rte_flow_item_vxlan *vxlan_mask = item->mask;
+
+ /* spec/mask are optional */
+ if (vxlan_spec == NULL && vxlan_mask == NULL)
+ return 0;
+
+ /* VNI must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&vxlan_mask->hdr.vni)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid VXLAN mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_vxlan_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_vxlan *vxlan_spec = item->spec;
+ const struct rte_flow_item_vxlan *vxlan_mask = item->mask;
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ /* spec/mask are optional */
+ if (vxlan_spec == NULL && vxlan_mask == NULL)
+ return 0;
+
+ /* Store the VNI and set filter flag */
+ tunnel_filter->tenant_id = ci_be24_to_cpu(vxlan_spec->hdr.vni);
+ tunnel_filter->filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_end_validate(const void *ctx,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ const struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ /* this should not happen, but check just in case */
+ if (i40e_check_tunnel_filter_type(tunnel_filter->filter_type) != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid tunnel filter configuration");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_vxlan_node_end_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_VXLAN;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_vxlan_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_VXLAN_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_VXLAN_NODE_OUTER_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_eth_validate,
+ .process = i40e_tunnel_node_outer_eth_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv4_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv6_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_VXLAN] = {
+ .name = "VXLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VXLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_vxlan_validate,
+ .process = i40e_tunnel_node_vxlan_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_INNER_ETH] = {
+ .name = "INNER_ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_eth_validate,
+ .process = i40e_tunnel_inner_eth_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_INNER_VLAN] = {
+ .name = "INNER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_vlan_validate,
+ .process = i40e_tunnel_node_inner_vlan_process,
+ },
+ [I40E_TUNNEL_VXLAN_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ .validate = i40e_tunnel_node_end_validate,
+ .process = i40e_tunnel_vxlan_node_end_process
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_VXLAN_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_OUTER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_OUTER_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_IPV4,
+ I40E_TUNNEL_VXLAN_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_UDP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_UDP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_VXLAN,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_VXLAN] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_INNER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_INNER_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_INNER_VLAN,
+ I40E_TUNNEL_VXLAN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_VXLAN_NODE_INNER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_VXLAN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
static int
i40e_tunnel_action_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
@@ -328,6 +630,15 @@ const struct ci_flow_engine_ops i40e_flow_engine_tunnel_ops = {
.flow_uninstall = i40e_tunnel_flow_uninstall,
};
+const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan = {
+ .name = "i40e_tunnel_vxlan",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_vxlan_graph,
+};
+
const struct ci_flow_engine i40e_flow_engine_tunnel_qinq = {
.name = "i40e_tunnel_qinq",
.type = I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
--
2.47.3
* [RFC PATCH v1 17/21] net/i40e: reimplement NVGRE parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (15 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 16/21] net/i40e: reimplement VXLAN parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 18/21] net/i40e: reimplement MPLS parser Anatoly Burakov
` (4 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement a
flow parser for NVGRE tunnels.
For VLAN nodes we reuse the existing VLAN node validation, which is more
stringent: arbitrary non-zero `vlan_tci` masks are no longer accepted, only
a full mask.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 295 +---------------------
drivers/net/intel/i40e/i40e_flow.h | 2 +
drivers/net/intel/i40e/i40e_flow_tunnel.c | 207 +++++++++++++++
3 files changed, 210 insertions(+), 294 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 1b1547a8ac..c2c9bec9ce 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -35,14 +35,10 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_fdir,
&i40e_flow_engine_tunnel_qinq,
&i40e_flow_engine_tunnel_vxlan,
+ &i40e_flow_engine_tunnel_nvgre,
}
};
-#define I40E_VLAN_TCI_MASK 0xFFFF
-#define I40E_VLAN_PRI_MASK 0xE000
-#define I40E_VLAN_CFI_MASK 0x1000
-#define I40E_VLAN_VID_MASK 0x0FFF
-
static int i40e_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item pattern[],
@@ -66,11 +62,6 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
@@ -173,40 +164,6 @@ static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
};
/* Pattern matched tunnel filter */
-static enum rte_flow_item_type pattern_nvgre_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_NVGRE,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_nvgre_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_NVGRE,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_nvgre_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_NVGRE,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_nvgre_4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_NVGRE,
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_VLAN,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static enum rte_flow_item_type pattern_mpls_1[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV4,
@@ -240,11 +197,6 @@ static enum rte_flow_item_type pattern_mpls_4[] = {
};
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* NVGRE */
- { pattern_nvgre_1, i40e_flow_parse_nvgre_filter },
- { pattern_nvgre_2, i40e_flow_parse_nvgre_filter },
- { pattern_nvgre_3, i40e_flow_parse_nvgre_filter },
- { pattern_nvgre_4, i40e_flow_parse_nvgre_filter },
/* MPLSoUDP & MPLSoGRE */
{ pattern_mpls_1, i40e_flow_parse_mpls_filter },
{ pattern_mpls_2, i40e_flow_parse_mpls_filter },
@@ -750,251 +702,6 @@ i40e_check_tunnel_filter_type(uint8_t filter_type)
return -1;
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: IMAC_IVLAN_TENID, IMAC_IVLAN,
- * IMAC_TENID, OMAC_TENID_IMAC and IMAC.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- */
-static int
-i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_eth *eth_spec;
- const struct rte_flow_item_eth *eth_mask;
- const struct rte_flow_item_nvgre *nvgre_spec;
- const struct rte_flow_item_nvgre *nvgre_mask;
- const struct rte_flow_item_vlan *vlan_spec;
- const struct rte_flow_item_vlan *vlan_mask;
- enum rte_flow_item_type item_type;
- uint8_t filter_type = 0;
- bool is_tni_masked = 0;
- uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
- bool nvgre_flag = 0;
- uint32_t tenant_id_be = 0;
- int ret;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- eth_spec = item->spec;
- eth_mask = item->mask;
-
- /* Check if ETH item is used for place holder.
- * If yes, both spec and mask should be NULL.
- * If no, both spec and mask shouldn't be NULL.
- */
- if ((!eth_spec && eth_mask) ||
- (eth_spec && !eth_mask)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ether spec/mask");
- return -rte_errno;
- }
-
- if (eth_spec && eth_mask) {
- /* DST address of inner MAC shouldn't be masked.
- * SRC address of Inner MAC should be masked.
- */
- if (!rte_is_broadcast_ether_addr(ð_mask->hdr.dst_addr) ||
- !rte_is_zero_ether_addr(ð_mask->hdr.src_addr) ||
- eth_mask->hdr.ether_type) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ether spec/mask");
- return -rte_errno;
- }
-
- if (!nvgre_flag) {
- rte_memcpy(&filter->outer_mac,
- ð_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
- filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
- } else {
- rte_memcpy(&filter->inner_mac,
- ð_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
- filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
- }
- }
-
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- vlan_spec = item->spec;
- vlan_mask = item->mask;
- if (!(vlan_spec && vlan_mask) ||
- vlan_mask->hdr.eth_proto) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid vlan item");
- return -rte_errno;
- }
-
- if (vlan_spec && vlan_mask) {
- if (vlan_mask->hdr.vlan_tci ==
- rte_cpu_to_be_16(I40E_VLAN_TCI_MASK))
- filter->inner_vlan =
- rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) &
- I40E_VLAN_TCI_MASK;
- filter_type |= RTE_ETH_TUNNEL_FILTER_IVLAN;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
- /* IPv4 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
- /* IPv6 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- nvgre_spec = item->spec;
- nvgre_mask = item->mask;
- /* Check if NVGRE item is used to describe protocol.
- * If yes, both spec and mask should be NULL.
- * If no, both spec and mask shouldn't be NULL.
- */
- if ((!nvgre_spec && nvgre_mask) ||
- (nvgre_spec && !nvgre_mask)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid NVGRE item");
- return -rte_errno;
- }
-
- if (nvgre_spec && nvgre_mask) {
- is_tni_masked =
- !!memcmp(nvgre_mask->tni, tni_mask,
- RTE_DIM(tni_mask));
- if (is_tni_masked) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid TNI mask");
- return -rte_errno;
- }
- if (nvgre_mask->protocol &&
- nvgre_mask->protocol != 0xFFFF) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid NVGRE item");
- return -rte_errno;
- }
- if (nvgre_mask->c_k_s_rsvd0_ver &&
- nvgre_mask->c_k_s_rsvd0_ver !=
- rte_cpu_to_be_16(0xFFFF)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid NVGRE item");
- return -rte_errno;
- }
- if (nvgre_spec->c_k_s_rsvd0_ver !=
- rte_cpu_to_be_16(0x2000) &&
- nvgre_mask->c_k_s_rsvd0_ver) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid NVGRE item");
- return -rte_errno;
- }
- if (nvgre_mask->protocol &&
- nvgre_spec->protocol !=
- rte_cpu_to_be_16(0x6558)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid NVGRE item");
- return -rte_errno;
- }
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- nvgre_spec->tni, 3);
- filter->tenant_id =
- rte_be_to_cpu_32(tenant_id_be);
- filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
- }
-
- nvgre_flag = 1;
- break;
- default:
- break;
- }
- }
-
- ret = i40e_check_tunnel_filter_type(filter_type);
- if (ret < 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL,
- "Invalid filter type");
- return -rte_errno;
- }
- filter->filter_type = filter_type;
-
- filter->tunnel_type = I40E_TUNNEL_TYPE_NVGRE;
-
- return 0;
-}
-
-static int
-i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_nvgre_pattern(dev, pattern,
- error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
/* 1. Last in item should be NULL as range is not supported.
* 2. Supported filter types: MPLS label.
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index 0981b4569a..161fa76b14 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -19,6 +19,7 @@ enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_FDIR,
I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_NVGRE,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -27,5 +28,6 @@ extern const struct ci_flow_engine i40e_flow_engine_ethertype;
extern const struct ci_flow_engine i40e_flow_engine_fdir;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_qinq;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
index ec6107dde0..643272a895 100644
--- a/drivers/net/intel/i40e/i40e_flow_tunnel.c
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -468,6 +468,204 @@ const struct rte_flow_graph i40e_tunnel_vxlan_graph = {
},
};
+/**
+ * NVGRE tunnel filter graph implementation
+ * Pattern: START -> ETH -> (IPv4 | IPv6) -> NVGRE -> ETH -> [VLAN] -> END
+ */
+enum i40e_tunnel_nvgre_node_id {
+ I40E_TUNNEL_NVGRE_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_NVGRE_NODE_OUTER_ETH,
+ I40E_TUNNEL_NVGRE_NODE_IPV4,
+ I40E_TUNNEL_NVGRE_NODE_IPV6,
+ I40E_TUNNEL_NVGRE_NODE_NVGRE,
+ I40E_TUNNEL_NVGRE_NODE_INNER_ETH,
+ I40E_TUNNEL_NVGRE_NODE_INNER_VLAN,
+ I40E_TUNNEL_NVGRE_NODE_END,
+ I40E_TUNNEL_NVGRE_NODE_MAX,
+};
+
+static int
+i40e_tunnel_node_nvgre_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_nvgre *nvgre_spec = item->spec;
+ const struct rte_flow_item_nvgre *nvgre_mask = item->mask;
+
+ /* spec/mask are optional */
+ if (nvgre_spec == NULL && nvgre_mask == NULL)
+ return 0;
+
+ /* TNI must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&nvgre_mask->tni)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
+ "Invalid NVGRE mask");
+ }
+ /* protocol must either be unmasked or fully masked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&nvgre_mask->protocol)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
+ "Invalid NVGRE mask");
+ }
+ /* reserved/version field must either be unmasked or fully masked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&nvgre_mask->c_k_s_rsvd0_ver)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
+ "Invalid NVGRE mask");
+ }
+ /* if reserved/version field is masked, it must be set to 0x2000 */
+ if (nvgre_mask->c_k_s_rsvd0_ver &&
+ nvgre_spec->c_k_s_rsvd0_ver != rte_cpu_to_be_16(0x2000)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
+ "Invalid NVGRE spec");
+ }
+ /* if protocol field is masked, it must be set to 0x6558 */
+ if (nvgre_mask->protocol &&
+ nvgre_spec->protocol != rte_cpu_to_be_16(0x6558)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, item,
+ "Invalid NVGRE spec");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_nvgre_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_nvgre *nvgre_spec = item->spec;
+ const struct rte_flow_item_nvgre *nvgre_mask = item->mask;
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ /* spec/mask are optional */
+ if (nvgre_spec == NULL && nvgre_mask == NULL)
+ return 0;
+
+ /* Store the TNI and set filter flag */
+ tunnel_filter->tenant_id = ci_be24_to_cpu(nvgre_spec->tni);
+ tunnel_filter->filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_nvgre_node_end_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_NVGRE;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_nvgre_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_NVGRE_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_NVGRE_NODE_OUTER_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_eth_validate,
+ .process = i40e_tunnel_node_outer_eth_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv4_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv6_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_NVGRE] = {
+ .name = "NVGRE",
+ .type = RTE_FLOW_ITEM_TYPE_NVGRE,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_nvgre_validate,
+ .process = i40e_tunnel_node_nvgre_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_INNER_ETH] = {
+ .name = "INNER_ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY |
+ RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_eth_validate,
+ .process = i40e_tunnel_inner_eth_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_INNER_VLAN] = {
+ .name = "INNER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_vlan_validate,
+ .process = i40e_tunnel_node_inner_vlan_process,
+ },
+ [I40E_TUNNEL_NVGRE_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ .validate = i40e_tunnel_node_end_validate,
+ .process = i40e_tunnel_nvgre_node_end_process
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_NVGRE_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_OUTER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_OUTER_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_IPV4,
+ I40E_TUNNEL_NVGRE_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_NVGRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_NVGRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_NVGRE] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_INNER_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_INNER_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_INNER_VLAN,
+ I40E_TUNNEL_NVGRE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_NVGRE_NODE_INNER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_NVGRE_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
static int
i40e_tunnel_action_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
@@ -630,6 +828,15 @@ const struct ci_flow_engine_ops i40e_flow_engine_tunnel_ops = {
.flow_uninstall = i40e_tunnel_flow_uninstall,
};
+const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre = {
+ .name = "i40e_tunnel_nvgre",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_NVGRE,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_nvgre_graph,
+};
+
const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan = {
.name = "i40e_tunnel_vxlan",
.type = I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
--
2.47.3
* [RFC PATCH v1 18/21] net/i40e: reimplement MPLS parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (16 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 17/21] net/i40e: reimplement NVGRE parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 19/21] net/i40e: reimplement gtp parser Anatoly Burakov
` (3 subsequent siblings)
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for MPLS tunnels.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 195 +---------------------
drivers/net/intel/i40e/i40e_flow.h | 2 +
drivers/net/intel/i40e/i40e_flow_tunnel.c | 174 +++++++++++++++++++
3 files changed, 177 insertions(+), 194 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index c2c9bec9ce..98a0ecbf3c 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -36,6 +36,7 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_tunnel_qinq,
&i40e_flow_engine_tunnel_vxlan,
&i40e_flow_engine_tunnel_nvgre,
+ &i40e_flow_engine_tunnel_mpls,
}
};
@@ -62,11 +63,6 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
const struct rte_flow_item pattern[],
const struct rte_flow_action actions[],
@@ -163,45 +159,7 @@ static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-/* Pattern matched tunnel filter */
-static enum rte_flow_item_type pattern_mpls_1[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_MPLS,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_mpls_2[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_MPLS,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_mpls_3[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_GRE,
- RTE_FLOW_ITEM_TYPE_MPLS,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_mpls_4[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_GRE,
- RTE_FLOW_ITEM_TYPE_MPLS,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* MPLSoUDP & MPLSoGRE */
- { pattern_mpls_1, i40e_flow_parse_mpls_filter },
- { pattern_mpls_2, i40e_flow_parse_mpls_filter },
- { pattern_mpls_3, i40e_flow_parse_mpls_filter },
- { pattern_mpls_4, i40e_flow_parse_mpls_filter },
/* GTP-C & GTP-U */
{ pattern_fdir_ipv4_gtpc, i40e_flow_parse_gtp_filter },
{ pattern_fdir_ipv4_gtpu, i40e_flow_parse_gtp_filter },
@@ -703,157 +661,6 @@ i40e_check_tunnel_filter_type(uint8_t filter_type)
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: MPLS label.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- */
-static int
-i40e_flow_parse_mpls_pattern(__rte_unused struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_mpls *mpls_spec;
- const struct rte_flow_item_mpls *mpls_mask;
- enum rte_flow_item_type item_type;
- bool is_mplsoudp = 0; /* 1 - MPLSoUDP, 0 - MPLSoGRE */
- const uint8_t label_mask[3] = {0xFF, 0xFF, 0xF0};
- uint32_t label_be = 0;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ETH item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
- /* IPv4 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
- /* IPv6 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- /* UDP is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid UDP item");
- return -rte_errno;
- }
- is_mplsoudp = 1;
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- /* GRE is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid GRE item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_MPLS:
- mpls_spec = item->spec;
- mpls_mask = item->mask;
-
- if (!mpls_spec || !mpls_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid MPLS item");
- return -rte_errno;
- }
-
- if (memcmp(mpls_mask->label_tc_s, label_mask, 3)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid MPLS label mask");
- return -rte_errno;
- }
- rte_memcpy(((uint8_t *)&label_be + 1),
- mpls_spec->label_tc_s, 3);
- filter->tenant_id = rte_be_to_cpu_32(label_be) >> 4;
- break;
- default:
- break;
- }
- }
-
- if (is_mplsoudp)
- filter->tunnel_type = I40E_TUNNEL_TYPE_MPLSoUDP;
- else
- filter->tunnel_type = I40E_TUNNEL_TYPE_MPLSoGRE;
-
- return 0;
-}
-
-static int
-i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_mpls_pattern(dev, pattern,
- error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
-
/* 1. Last in item should be NULL as range is not supported.
* 2. Supported filter types: GTP TEID.
* 3. Mask of fields which need to be matched should be
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index 161fa76b14..55e6b5dbdd 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -20,6 +20,7 @@ enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
I40E_FLOW_ENGINE_TYPE_TUNNEL_NVGRE,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_MPLS,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -29,5 +30,6 @@ extern const struct ci_flow_engine i40e_flow_engine_fdir;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_qinq;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_mpls;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
index 643272a895..a7184d2d50 100644
--- a/drivers/net/intel/i40e/i40e_flow_tunnel.c
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -666,6 +666,171 @@ const struct rte_flow_graph i40e_tunnel_nvgre_graph = {
},
};
+/**
+ * MPLS tunnel filter graph implementation
+ * Pattern: START -> ETH -> (IPv4 | IPv6) -> (UDP | GRE) -> MPLS -> END
+ */
+enum i40e_tunnel_mpls_node_id {
+ I40E_TUNNEL_MPLS_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_MPLS_NODE_ETH,
+ I40E_TUNNEL_MPLS_NODE_IPV4,
+ I40E_TUNNEL_MPLS_NODE_IPV6,
+ I40E_TUNNEL_MPLS_NODE_UDP,
+ I40E_TUNNEL_MPLS_NODE_GRE,
+ I40E_TUNNEL_MPLS_NODE_MPLS,
+ I40E_TUNNEL_MPLS_NODE_END,
+ I40E_TUNNEL_MPLS_NODE_MAX,
+};
+
+static int
+i40e_tunnel_mpls_node_udp_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_MPLSoUDP;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_mpls_node_gre_process(void *ctx, const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_MPLSoGRE;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_mpls_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_mpls *mpls_mask = item->mask;
+ const uint8_t label_mask[3] = {0xFF, 0xFF, 0xF0};
+
+ /* MPLS label and TC must be fully masked */
+ if (memcmp(mpls_mask->label_tc_s, label_mask, 3)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid MPLS mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_mpls_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct rte_flow_item_mpls *mpls_spec = item->spec;
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ tunnel_filter->tenant_id = ci_be24_to_cpu(mpls_spec->label_tc_s) >> 4;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_mpls_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_MPLS_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_MPLS_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_MPLS_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv4_process,
+ },
+ [I40E_TUNNEL_MPLS_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv6_process,
+ },
+ [I40E_TUNNEL_MPLS_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_mpls_node_udp_process,
+ },
+ [I40E_TUNNEL_MPLS_NODE_GRE] = {
+ .name = "GRE",
+ .type = RTE_FLOW_ITEM_TYPE_GRE,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_mpls_node_gre_process,
+ },
+ [I40E_TUNNEL_MPLS_NODE_MPLS] = {
+ .name = "MPLS",
+ .type = RTE_FLOW_ITEM_TYPE_MPLS,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_mpls_validate,
+ .process = i40e_tunnel_node_mpls_process,
+ },
+ [I40E_TUNNEL_MPLS_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_MPLS_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_IPV4,
+ I40E_TUNNEL_MPLS_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_UDP,
+ I40E_TUNNEL_MPLS_NODE_GRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_UDP,
+ I40E_TUNNEL_MPLS_NODE_GRE,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_MPLS,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_GRE] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_MPLS,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_MPLS_NODE_MPLS] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_MPLS_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
static int
i40e_tunnel_action_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
@@ -846,6 +1011,15 @@ const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan = {
.graph = &i40e_tunnel_vxlan_graph,
};
+const struct ci_flow_engine i40e_flow_engine_tunnel_mpls = {
+ .name = "i40e_tunnel_mpls",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_MPLS,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_mpls_graph,
+};
+
const struct ci_flow_engine i40e_flow_engine_tunnel_qinq = {
.name = "i40e_tunnel_qinq",
.type = I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
--
2.47.3
* [RFC PATCH v1 19/21] net/i40e: reimplement GTP parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 18/21] net/i40e: reimplement MPLS parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 20/21] net/i40e: reimplement L4 cloud parser Anatoly Burakov
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for GTP tunnels.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_flow.c | 191 +---------------------
drivers/net/intel/i40e/i40e_flow.h | 2 +
drivers/net/intel/i40e/i40e_flow_tunnel.c | 175 ++++++++++++++++++++
3 files changed, 178 insertions(+), 190 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 98a0ecbf3c..3fff01755e 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -37,6 +37,7 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_tunnel_vxlan,
&i40e_flow_engine_tunnel_nvgre,
&i40e_flow_engine_tunnel_mpls,
+ &i40e_flow_engine_tunnel_gtp,
}
};
@@ -63,11 +64,6 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
struct rte_flow_error *error,
struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
struct i40e_tunnel_filter *filter);
static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf);
@@ -106,22 +102,6 @@ static enum rte_flow_item_type pattern_fdir_ipv4_sctp[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpc[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPC,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_gtpu[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static enum rte_flow_item_type pattern_fdir_ipv6_udp[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_IPV6,
@@ -143,28 +123,7 @@ static enum rte_flow_item_type pattern_fdir_ipv6_sctp[] = {
RTE_FLOW_ITEM_TYPE_END,
};
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpc[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPC,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_gtpu[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_GTPU,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* GTP-C & GTP-U */
- { pattern_fdir_ipv4_gtpc, i40e_flow_parse_gtp_filter },
- { pattern_fdir_ipv4_gtpu, i40e_flow_parse_gtp_filter },
- { pattern_fdir_ipv6_gtpc, i40e_flow_parse_gtp_filter },
- { pattern_fdir_ipv6_gtpu, i40e_flow_parse_gtp_filter },
/* L4 over port */
{ pattern_fdir_ipv4_udp, i40e_flow_parse_l4_cloud_filter },
{ pattern_fdir_ipv4_tcp, i40e_flow_parse_l4_cloud_filter },
@@ -661,154 +620,6 @@ i40e_check_tunnel_filter_type(uint8_t filter_type)
}
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: GTP TEID.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- * 5. GTP profile supports GTPv1 only.
- * 6. GTP-C response message ('source_port' = 2123) is not supported.
- */
-static int
-i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- const struct rte_flow_item *item = pattern;
- const struct rte_flow_item_gtp *gtp_spec;
- const struct rte_flow_item_gtp *gtp_mask;
- enum rte_flow_item_type item_type;
-
- if (!pf->gtp_support) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "GTP is not supported by default.");
- return -rte_errno;
- }
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ETH item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
- /* IPv4 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
- /* IPv6 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid UDP item");
- return -rte_errno;
- }
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- case RTE_FLOW_ITEM_TYPE_GTPU:
- gtp_spec = item->spec;
- gtp_mask = item->mask;
-
- if (!gtp_spec || !gtp_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid GTP item");
- return -rte_errno;
- }
-
- if (gtp_mask->hdr.gtp_hdr_info ||
- gtp_mask->hdr.msg_type ||
- gtp_mask->hdr.plen ||
- gtp_mask->hdr.teid != UINT32_MAX) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid GTP mask");
- return -rte_errno;
- }
-
- if (item_type == RTE_FLOW_ITEM_TYPE_GTPC)
- filter->tunnel_type = I40E_TUNNEL_TYPE_GTPC;
- else if (item_type == RTE_FLOW_ITEM_TYPE_GTPU)
- filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
-
- filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
-
- break;
- default:
- break;
- }
- }
-
- return 0;
-}
-
-static int
-i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_gtp_pattern(dev, pattern,
- error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
-
-
static int
i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index 55e6b5dbdd..95eec07373 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -21,6 +21,7 @@ enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_TUNNEL_VXLAN,
I40E_FLOW_ENGINE_TYPE_TUNNEL_NVGRE,
I40E_FLOW_ENGINE_TYPE_TUNNEL_MPLS,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_GTP,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -31,5 +32,6 @@ extern const struct ci_flow_engine i40e_flow_engine_tunnel_qinq;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_mpls;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_gtp;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
index a7184d2d50..1159c4a713 100644
--- a/drivers/net/intel/i40e/i40e_flow_tunnel.c
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -831,6 +831,172 @@ const struct rte_flow_graph i40e_tunnel_mpls_graph = {
},
};
+/**
+ * GTP tunnel filter graph implementation
+ * Pattern: START -> ETH -> (IPv4 | IPv6) -> UDP -> (GTPC | GTPU) -> END
+ */
+enum i40e_tunnel_gtp_node_id {
+ I40E_TUNNEL_GTP_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_GTP_NODE_ETH,
+ I40E_TUNNEL_GTP_NODE_IPV4,
+ I40E_TUNNEL_GTP_NODE_IPV6,
+ I40E_TUNNEL_GTP_NODE_UDP,
+ I40E_TUNNEL_GTP_NODE_GTPC,
+ I40E_TUNNEL_GTP_NODE_GTPU,
+ I40E_TUNNEL_GTP_NODE_END,
+ I40E_TUNNEL_GTP_NODE_MAX,
+};
+
+static int
+i40e_tunnel_node_gtp_validate(const void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_gtp *gtp_mask = item->mask;
+ const struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ const struct rte_eth_dev *dev = tunnel_ctx->base.dev;
+ const struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* does HW support GTP? */
+ if (!pf->gtp_support) {
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "GTP not supported");
+ }
+
+ /* reject unsupported fields */
+ if (gtp_mask->hdr.gtp_hdr_info ||
+ gtp_mask->hdr.msg_type ||
+ gtp_mask->hdr.plen) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid GTP mask");
+ }
+
+ /* teid must be fully masked */
+ if (!CI_FIELD_IS_MASKED(&gtp_mask->hdr.teid)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid GTP mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_gtp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_gtp *gtp_spec = item->spec;
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+
+ if (item->type == RTE_FLOW_ITEM_TYPE_GTPC) {
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_GTPC;
+ } else if (item->type == RTE_FLOW_ITEM_TYPE_GTPU) {
+ tunnel_filter->tunnel_type = I40E_TUNNEL_TYPE_GTPU;
+ } else {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid GTP item type");
+ }
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(gtp_spec->hdr.teid);
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_gtp_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_GTP_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_GTP_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_GTP_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv4_process,
+ },
+ [I40E_TUNNEL_GTP_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv6_process,
+ },
+ [I40E_TUNNEL_GTP_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_GTP_NODE_GTPC] = {
+ .name = "GTPC",
+ .type = RTE_FLOW_ITEM_TYPE_GTPC,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_gtp_validate,
+ .process = i40e_tunnel_node_gtp_process,
+ },
+ [I40E_TUNNEL_GTP_NODE_GTPU] = {
+ .name = "GTPU",
+ .type = RTE_FLOW_ITEM_TYPE_GTPU,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_gtp_validate,
+ .process = i40e_tunnel_node_gtp_process,
+ },
+ [I40E_TUNNEL_GTP_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_GTP_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_IPV4,
+ I40E_TUNNEL_GTP_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_UDP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_UDP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_GTPC,
+ I40E_TUNNEL_GTP_NODE_GTPU,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_GTPC] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_GTP_NODE_GTPU] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_GTP_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
static int
i40e_tunnel_action_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
@@ -1020,6 +1186,15 @@ const struct ci_flow_engine i40e_flow_engine_tunnel_mpls = {
.graph = &i40e_tunnel_mpls_graph,
};
+const struct ci_flow_engine i40e_flow_engine_tunnel_gtp = {
+ .name = "i40e_tunnel_gtp",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_GTP,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_gtp_graph,
+};
+
const struct ci_flow_engine i40e_flow_engine_tunnel_qinq = {
.name = "i40e_tunnel_qinq",
.type = I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
--
2.47.3
* [RFC PATCH v1 20/21] net/i40e: reimplement L4 cloud parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 19/21] net/i40e: reimplement GTP parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-16 17:27 ` [RFC PATCH v1 21/21] net/i40e: reimplement hash parser Anatoly Burakov
2026-03-17 0:42 ` [RFC PATCH v1 00/21] Building a better rte_flow parser Stephen Hemminger
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
a flow parser for L4 tunnels.
In addition to using the new graph infrastructure, some of the checks were
made more stringent and/or more correct. In particular:
- the old code did not check whether fields other than ports are masked
(such patterns are now rejected)
- the old code did not check whether src/dst ports are fully masked (masks
other than full are now rejected)
- the old code used the spec to decide which port to copy (as a result, it
was not possible to match port 0 - this is now allowed)
Because the old parsing infrastructure is no longer needed (the hash parser
has always worked outside of it), it has been removed.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 15 +-
drivers/net/intel/i40e/i40e_flow.c | 617 +---------------------
drivers/net/intel/i40e/i40e_flow.h | 3 +-
drivers/net/intel/i40e/i40e_flow_tunnel.c | 305 +++++++++++
4 files changed, 310 insertions(+), 630 deletions(-)
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 7c4786bec0..2503830f22 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1292,23 +1292,10 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
struct i40e_filter_ctx {
- union {
- struct i40e_tunnel_filter_conf consistent_tunnel_filter;
- struct i40e_rte_flow_rss_conf rss_conf;
- };
+ struct i40e_rte_flow_rss_conf rss_conf;
enum rte_filter_type type;
};
-typedef int (*parse_filter_t)(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
-struct i40e_valid_pattern {
- enum rte_flow_item_type *items;
- parse_filter_t parse_filter;
-};
-
int i40e_dev_switch_queues(struct i40e_pf *pf, bool on);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf,
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 3fff01755e..2b4a4dd12c 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -38,6 +38,7 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_tunnel_nvgre,
&i40e_flow_engine_tunnel_mpls,
&i40e_flow_engine_tunnel_gtp,
+ &i40e_flow_engine_tunnel_l4,
}
};
@@ -60,19 +61,7 @@ static int i40e_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
const struct rte_flow_action *actions,
void *data, struct rte_flow_error *error);
-static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter);
-static int i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
- struct i40e_tunnel_filter *filter);
-static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf);
-static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter);
const struct rte_flow_ops i40e_flow_ops = {
.validate = i40e_flow_validate,
.create = i40e_flow_create,
@@ -81,148 +70,6 @@ const struct rte_flow_ops i40e_flow_ops = {
.query = i40e_flow_query,
};
-static enum rte_flow_item_type pattern_fdir_ipv4_udp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_tcp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv4_sctp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_udp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_tcp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_TCP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static enum rte_flow_item_type pattern_fdir_ipv6_sctp[] = {
- RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV6,
- RTE_FLOW_ITEM_TYPE_SCTP,
- RTE_FLOW_ITEM_TYPE_END,
-};
-
-static struct i40e_valid_pattern i40e_supported_patterns[] = {
- /* L4 over port */
- { pattern_fdir_ipv4_udp, i40e_flow_parse_l4_cloud_filter },
- { pattern_fdir_ipv4_tcp, i40e_flow_parse_l4_cloud_filter },
- { pattern_fdir_ipv4_sctp, i40e_flow_parse_l4_cloud_filter },
- { pattern_fdir_ipv6_udp, i40e_flow_parse_l4_cloud_filter },
- { pattern_fdir_ipv6_tcp, i40e_flow_parse_l4_cloud_filter },
- { pattern_fdir_ipv6_sctp, i40e_flow_parse_l4_cloud_filter },
-};
-
-/* Find the first VOID or non-VOID item pointer */
-static const struct rte_flow_item *
-i40e_find_first_item(const struct rte_flow_item *item, bool is_void)
-{
- bool is_find;
-
- while (item->type != RTE_FLOW_ITEM_TYPE_END) {
- if (is_void)
- is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID;
- else
- is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID;
- if (is_find)
- break;
- item++;
- }
- return item;
-}
-
-/* Skip all VOID items of the pattern */
-static void
-i40e_pattern_skip_void_item(struct rte_flow_item *items,
- const struct rte_flow_item *pattern)
-{
- uint32_t cpy_count = 0;
- const struct rte_flow_item *pb = pattern, *pe = pattern;
-
- for (;;) {
- /* Find a non-void item first */
- pb = i40e_find_first_item(pb, false);
- if (pb->type == RTE_FLOW_ITEM_TYPE_END) {
- pe = pb;
- break;
- }
-
- /* Find a void item */
- pe = i40e_find_first_item(pb + 1, true);
-
- cpy_count = pe - pb;
- rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count);
-
- items += cpy_count;
-
- if (pe->type == RTE_FLOW_ITEM_TYPE_END) {
- pb = pe;
- break;
- }
-
- pb = pe + 1;
- }
- /* Copy the END item. */
- rte_memcpy(items, pe, sizeof(struct rte_flow_item));
-}
-
-/* Check if the pattern matches a supported item type array */
-static bool
-i40e_match_pattern(enum rte_flow_item_type *item_array,
- struct rte_flow_item *pattern)
-{
- struct rte_flow_item *item = pattern;
-
- while ((*item_array == item->type) &&
- (*item_array != RTE_FLOW_ITEM_TYPE_END)) {
- item_array++;
- item++;
- }
-
- return (*item_array == RTE_FLOW_ITEM_TYPE_END &&
- item->type == RTE_FLOW_ITEM_TYPE_END);
-}
-
-/* Find if there's parse filter function matched */
-static parse_filter_t
-i40e_find_parse_filter_func(struct rte_flow_item *pattern, uint32_t *idx)
-{
- parse_filter_t parse_filter = NULL;
- uint8_t i = *idx;
-
- for (; i < RTE_DIM(i40e_supported_patterns); i++) {
- if (i40e_match_pattern(i40e_supported_patterns[i].items,
- pattern)) {
- parse_filter = i40e_supported_patterns[i].parse_filter;
- break;
- }
- }
-
- *idx = ++i;
-
- return parse_filter;
-}
-
int
i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid)
{
@@ -317,309 +164,6 @@ i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
return I40E_FILTER_PCTYPE_INVALID;
}
-/* Parse to get the action info of a tunnel filter
- * Tunnel action only supports PF, VF and QUEUE.
- */
-static int
-i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *actions,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- const struct rte_flow_action_queue *act_q;
- struct ci_flow_actions parsed_actions = {0};
- struct ci_flow_actions_check_param ac_param = {
- .allowed_types = (enum rte_flow_action_type[]) {
- RTE_FLOW_ACTION_TYPE_QUEUE,
- RTE_FLOW_ACTION_TYPE_PF,
- RTE_FLOW_ACTION_TYPE_VF,
- RTE_FLOW_ACTION_TYPE_END
- },
- .max_actions = 2,
- };
- const struct rte_flow_action *first, *second;
- int ret;
-
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- first = parsed_actions.actions[0];
- /* can be NULL */
- second = parsed_actions.actions[1];
-
- /* first action must be PF or VF */
- if (first->type == RTE_FLOW_ACTION_TYPE_VF) {
- const struct rte_flow_action_vf *vf = first->conf;
- if (vf->id >= pf->vf_num) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, first,
- "Invalid VF ID for tunnel filter");
- return -rte_errno;
- }
- filter->vf_id = vf->id;
- filter->is_to_vf = 1;
- } else if (first->type != RTE_FLOW_ACTION_TYPE_PF) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, first,
- "Unsupported action");
- }
-
- /* check if second action is QUEUE */
- if (second == NULL)
- return 0;
-
- act_q = second->conf;
- /* check queue ID for PF flow */
- if (!filter->is_to_vf && act_q->index >= pf->dev_data->nb_rx_queues) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q,
- "Invalid queue ID for tunnel filter");
- }
- /* check queue ID for VF flow */
- if (filter->is_to_vf && act_q->index >= pf->vf_nb_qps) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q,
- "Invalid queue ID for tunnel filter");
- }
- filter->queue_id = act_q->index;
-
- return 0;
-}
-
-/* 1. Last in item should be NULL as range is not supported.
- * 2. Supported filter types: Source port only and Destination port only.
- * 3. Mask of fields which need to be matched should be
- * filled with 1.
- * 4. Mask of fields which needn't to be matched should be
- * filled with 0.
- */
-static int
-i40e_flow_parse_l4_pattern(const struct rte_flow_item *pattern,
- struct rte_flow_error *error,
- struct i40e_tunnel_filter_conf *filter)
-{
- const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
- const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
- const struct rte_flow_item_udp *udp_spec, *udp_mask;
- const struct rte_flow_item *item = pattern;
- enum rte_flow_item_type item_type;
-
- for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
- if (item->last) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Not support range");
- return -rte_errno;
- }
- item_type = item->type;
- switch (item_type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid ETH item");
- return -rte_errno;
- }
-
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV4;
- /* IPv4 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv4 item");
- return -rte_errno;
- }
-
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- filter->ip_type = I40E_TUNNEL_IPTYPE_IPV6;
- /* IPv6 is used to describe protocol,
- * spec and mask should be NULL.
- */
- if (item->spec || item->mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid IPv6 item");
- return -rte_errno;
- }
-
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- udp_spec = item->spec;
- udp_mask = item->mask;
-
- if (!udp_spec || !udp_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid udp item");
- return -rte_errno;
- }
-
- if (udp_spec->hdr.src_port != 0 &&
- udp_spec->hdr.dst_port != 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid udp spec");
- return -rte_errno;
- }
-
- if (udp_spec->hdr.src_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_SRC;
- filter->tenant_id =
- rte_be_to_cpu_32(udp_spec->hdr.src_port);
- }
-
- if (udp_spec->hdr.dst_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_DST;
- filter->tenant_id =
- rte_be_to_cpu_32(udp_spec->hdr.dst_port);
- }
-
- filter->tunnel_type = I40E_CLOUD_TYPE_UDP;
-
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- tcp_spec = item->spec;
- tcp_mask = item->mask;
-
- if (!tcp_spec || !tcp_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid tcp item");
- return -rte_errno;
- }
-
- if (tcp_spec->hdr.src_port != 0 &&
- tcp_spec->hdr.dst_port != 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid tcp spec");
- return -rte_errno;
- }
-
- if (tcp_spec->hdr.src_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_SRC;
- filter->tenant_id =
- rte_be_to_cpu_32(tcp_spec->hdr.src_port);
- }
-
- if (tcp_spec->hdr.dst_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_DST;
- filter->tenant_id =
- rte_be_to_cpu_32(tcp_spec->hdr.dst_port);
- }
-
- filter->tunnel_type = I40E_CLOUD_TYPE_TCP;
-
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- sctp_spec = item->spec;
- sctp_mask = item->mask;
-
- if (!sctp_spec || !sctp_mask) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid sctp item");
- return -rte_errno;
- }
-
- if (sctp_spec->hdr.src_port != 0 &&
- sctp_spec->hdr.dst_port != 0) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Invalid sctp spec");
- return -rte_errno;
- }
-
- if (sctp_spec->hdr.src_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_SRC;
- filter->tenant_id =
- rte_be_to_cpu_32(sctp_spec->hdr.src_port);
- }
-
- if (sctp_spec->hdr.dst_port != 0) {
- filter->l4_port_type =
- I40E_L4_PORT_TYPE_DST;
- filter->tenant_id =
- rte_be_to_cpu_32(sctp_spec->hdr.dst_port);
- }
-
- filter->tunnel_type = I40E_CLOUD_TYPE_SCTP;
-
- break;
- default:
- break;
- }
- }
-
- return 0;
-}
-
-static int
-i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct i40e_filter_ctx *filter)
-{
- struct i40e_tunnel_filter_conf *tunnel_filter = &filter->consistent_tunnel_filter;
- int ret;
-
- ret = i40e_flow_parse_l4_pattern(pattern, error, tunnel_filter);
- if (ret)
- return ret;
-
- ret = i40e_flow_parse_tunnel_action(dev, actions, error, tunnel_filter);
- if (ret)
- return ret;
-
- filter->type = RTE_ETH_FILTER_TUNNEL;
-
- return ret;
-}
-
-int
-i40e_check_tunnel_filter_type(uint8_t filter_type)
-{
- const uint16_t i40e_supported_tunnel_filter_types[] = {
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
- RTE_ETH_TUNNEL_FILTER_IVLAN,
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
- RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
- RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
- RTE_ETH_TUNNEL_FILTER_IMAC,
- RTE_ETH_TUNNEL_FILTER_IMAC,
- };
- uint8_t i;
-
- for (i = 0; i < RTE_DIM(i40e_supported_tunnel_filter_types); i++) {
- if (filter_type == i40e_supported_tunnel_filter_types[i])
- return 0;
- }
- return -1;
-}
-
-
static int
i40e_flow_check(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -628,11 +172,6 @@ i40e_flow_check(struct rte_eth_dev *dev,
struct i40e_filter_ctx *filter_ctx,
struct rte_flow_error *error)
{
- struct rte_flow_item *items; /* internal pattern w/o VOID items */
- parse_filter_t parse_filter;
- uint32_t item_num = 0; /* non-void item number of pattern*/
- uint32_t i = 0;
- bool flag = false;
int ret;
ret = ci_flow_check_attr(attr, NULL, error);
@@ -656,52 +195,7 @@ i40e_flow_check(struct rte_eth_dev *dev,
/* try parsing as RSS */
filter_ctx->type = RTE_ETH_FILTER_HASH;
- ret = i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error);
- if (!ret) {
- return ret;
- }
-
- i = 0;
- /* Get the non-void item number of pattern */
- while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) {
- if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID)
- item_num++;
- i++;
- }
- item_num++;
- items = calloc(item_num, sizeof(struct rte_flow_item));
- if (items == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL,
- "No memory for PMD internal items.");
- return -ENOMEM;
- }
-
- i40e_pattern_skip_void_item(items, pattern);
-
- i = 0;
- ret = I40E_NOT_SUPPORTED;
- do {
- parse_filter = i40e_find_parse_filter_func(items, &i);
- if (!parse_filter && !flag) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pattern, "Unsupported pattern");
-
- free(items);
- return -rte_errno;
- }
-
- if (parse_filter)
- ret = parse_filter(dev, items, actions, error, filter_ctx);
-
- flag = true;
- } while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns)));
-
- free(items);
-
- return ret;
+ return i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error);
}
static int
@@ -756,14 +250,6 @@ i40e_flow_create(struct rte_eth_dev *dev,
}
switch (filter_ctx.type) {
- case RTE_ETH_FILTER_TUNNEL:
- ret = i40e_dev_consistent_tunnel_filter_set(pf,
- &filter_ctx.consistent_tunnel_filter, 1);
- if (ret)
- goto free_flow;
- flow->rule = TAILQ_LAST(&pf->tunnel.tunnel_list,
- i40e_tunnel_filter_list);
- break;
case RTE_ETH_FILTER_HASH:
ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
if (ret)
@@ -805,10 +291,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
return 0;
switch (filter_type) {
- case RTE_ETH_FILTER_TUNNEL:
- ret = i40e_flow_destroy_tunnel_filter(pf,
- (struct i40e_tunnel_filter *)flow->rule);
- break;
case RTE_ETH_FILTER_HASH:
ret = i40e_hash_filter_destroy(pf, flow->rule);
break;
@@ -831,65 +313,6 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
return ret;
}
-static int
-i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
- struct i40e_tunnel_filter *filter)
-{
- struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- struct i40e_vsi *vsi;
- struct i40e_pf_vf *vf;
- struct i40e_aqc_cloud_filters_element_bb cld_filter;
- struct i40e_tunnel_rule *tunnel_rule = &pf->tunnel;
- struct i40e_tunnel_filter *node;
- bool big_buffer = 0;
- int ret = 0;
-
- memset(&cld_filter, 0, sizeof(cld_filter));
- rte_ether_addr_copy((struct rte_ether_addr *)&filter->input.outer_mac,
- (struct rte_ether_addr *)&cld_filter.element.outer_mac);
- rte_ether_addr_copy((struct rte_ether_addr *)&filter->input.inner_mac,
- (struct rte_ether_addr *)&cld_filter.element.inner_mac);
- cld_filter.element.inner_vlan = filter->input.inner_vlan;
- cld_filter.element.flags = filter->input.flags;
- cld_filter.element.tenant_id = filter->input.tenant_id;
- cld_filter.element.queue_number = filter->queue;
- rte_memcpy(cld_filter.general_fields,
- filter->input.general_fields,
- sizeof(cld_filter.general_fields));
-
- if (!filter->is_to_vf)
- vsi = pf->main_vsi;
- else {
- vf = &pf->vfs[filter->vf_id];
- vsi = vf->vsi;
- }
-
- if (((filter->input.flags & I40E_AQC_ADD_CLOUD_FILTER_0X11) ==
- I40E_AQC_ADD_CLOUD_FILTER_0X11) ||
- ((filter->input.flags & I40E_AQC_ADD_CLOUD_FILTER_0X12) ==
- I40E_AQC_ADD_CLOUD_FILTER_0X12) ||
- ((filter->input.flags & I40E_AQC_ADD_CLOUD_FILTER_0X10) ==
- I40E_AQC_ADD_CLOUD_FILTER_0X10))
- big_buffer = 1;
-
- if (big_buffer)
- ret = i40e_aq_rem_cloud_filters_bb(hw, vsi->seid,
- &cld_filter, 1);
- else
- ret = i40e_aq_rem_cloud_filters(hw, vsi->seid,
- &cld_filter.element, 1);
- if (ret < 0)
- return -ENOTSUP;
-
- node = i40e_sw_tunnel_filter_lookup(tunnel_rule, &filter->input);
- if (!node)
- return -EINVAL;
-
- ret = i40e_sw_tunnel_filter_del(pf, &node->input);
-
- return ret;
-}
-
static int
i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
@@ -901,14 +324,6 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
if (ret != 0)
return ret;
- ret = i40e_flow_flush_tunnel_filter(pf);
- if (ret) {
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to flush tunnel flows.");
- return -rte_errno;
- }
-
ret = i40e_hash_filter_flush(pf);
if (ret)
rte_flow_error_set(error, -ret,
@@ -917,34 +332,6 @@ i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
return ret;
}
-/* Flush all tunnel filters */
-static int
-i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
-{
- struct i40e_tunnel_filter_list
- *tunnel_list = &pf->tunnel.tunnel_list;
- struct i40e_tunnel_filter *filter;
- struct rte_flow *flow;
- void *temp;
- int ret = 0;
-
- while ((filter = TAILQ_FIRST(tunnel_list))) {
- ret = i40e_flow_destroy_tunnel_filter(pf, filter);
- if (ret)
- return ret;
- }
-
- /* Delete tunnel flows in flow list. */
- RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
- if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
- TAILQ_REMOVE(&pf->flow_list, flow, node);
- rte_free(flow);
- }
- }
-
- return ret;
-}
-
static int
i40e_flow_query(struct rte_eth_dev *dev,
struct rte_flow *flow,
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index 95eec07373..24683dcff9 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -12,7 +12,6 @@ uint8_t
i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
enum rte_flow_item_type item_type,
struct i40e_fdir_filter_conf *filter);
-int i40e_check_tunnel_filter_type(uint8_t filter_type);
enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_ETHERTYPE = 0,
@@ -22,6 +21,7 @@ enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_TUNNEL_NVGRE,
I40E_FLOW_ENGINE_TYPE_TUNNEL_MPLS,
I40E_FLOW_ENGINE_TYPE_TUNNEL_GTP,
+ I40E_FLOW_ENGINE_TYPE_TUNNEL_L4,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -33,5 +33,6 @@ extern const struct ci_flow_engine i40e_flow_engine_tunnel_vxlan;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_mpls;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_gtp;
+extern const struct ci_flow_engine i40e_flow_engine_tunnel_l4;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_tunnel.c b/drivers/net/intel/i40e/i40e_flow_tunnel.c
index 1159c4a713..1aa8677f14 100644
--- a/drivers/net/intel/i40e/i40e_flow_tunnel.c
+++ b/drivers/net/intel/i40e/i40e_flow_tunnel.c
@@ -19,6 +19,27 @@ struct i40e_tunnel_flow {
struct i40e_tunnel_filter_conf filter;
};
+static int
+i40e_check_tunnel_filter_type(uint8_t filter_type)
+{
+ const uint16_t i40e_supported_tunnel_filter_types[] = {
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_IVLAN,
+ RTE_ETH_TUNNEL_FILTER_IMAC | RTE_ETH_TUNNEL_FILTER_TENID,
+ RTE_ETH_TUNNEL_FILTER_OMAC | RTE_ETH_TUNNEL_FILTER_TENID |
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ RTE_ETH_TUNNEL_FILTER_IMAC,
+ };
+ uint8_t i;
+
+ for (i = 0; i < RTE_DIM(i40e_supported_tunnel_filter_types); i++) {
+ if (filter_type == i40e_supported_tunnel_filter_types[i])
+ return 0;
+ }
+ return -1;
+}
+
/**
* QinQ tunnel filter graph implementation
* Pattern: START -> ETH -> OUTER_VLAN -> INNER_VLAN -> END
@@ -997,6 +1018,281 @@ const struct rte_flow_graph i40e_tunnel_gtp_graph = {
},
};
+/**
+ * L4 tunnel filter graph implementation
+ * Pattern: START -> ETH -> (IPv4 | IPv6) -> (TCP | UDP | SCTP) -> END
+ */
+enum i40e_tunnel_l4_node_id {
+ I40E_TUNNEL_L4_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_TUNNEL_L4_NODE_ETH,
+ I40E_TUNNEL_L4_NODE_IPV4,
+ I40E_TUNNEL_L4_NODE_IPV6,
+ I40E_TUNNEL_L4_NODE_TCP,
+ I40E_TUNNEL_L4_NODE_UDP,
+ I40E_TUNNEL_L4_NODE_SCTP,
+ I40E_TUNNEL_L4_NODE_END,
+ I40E_TUNNEL_L4_NODE_MAX,
+};
+
+static int
+i40e_tunnel_node_tcp_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+
+ /* only source/destination ports are supported */
+ if (tcp_mask->hdr.sent_seq ||
+ tcp_mask->hdr.recv_ack ||
+ tcp_mask->hdr.data_off ||
+ tcp_mask->hdr.tcp_flags ||
+ tcp_mask->hdr.rx_win ||
+ tcp_mask->hdr.cksum ||
+ tcp_mask->hdr.tcp_urp) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid TCP mask");
+ }
+
+ /* src/dst ports have to be fully masked or fully unmasked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&tcp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid TCP mask");
+ }
+ /* there can be only one! */
+ if (tcp_mask->hdr.src_port && tcp_mask->hdr.dst_port) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid TCP mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_tcp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+ const struct rte_flow_item_tcp *tcp_spec = item->spec;
+ const struct rte_flow_item_tcp *tcp_mask = item->mask;
+
+ if (tcp_mask->hdr.src_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_SRC;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(tcp_spec->hdr.src_port);
+ } else if (tcp_mask->hdr.dst_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_DST;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(tcp_spec->hdr.dst_port);
+ }
+ tunnel_filter->tunnel_type = I40E_CLOUD_TYPE_TCP;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_udp_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+
+ /* only source/destination ports are supported */
+ if (udp_mask->hdr.dgram_len ||
+ udp_mask->hdr.dgram_cksum) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid UDP mask");
+ }
+
+ /* src/dst ports have to be fully masked or fully unmasked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&udp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid UDP mask");
+ }
+ /* there can be only one! */
+ if (udp_mask->hdr.src_port && udp_mask->hdr.dst_port) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid UDP mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_udp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+ const struct rte_flow_item_udp *udp_spec = item->spec;
+ const struct rte_flow_item_udp *udp_mask = item->mask;
+
+ if (udp_mask->hdr.src_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_SRC;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(udp_spec->hdr.src_port);
+ } else if (udp_mask->hdr.dst_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_DST;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(udp_spec->hdr.dst_port);
+ }
+ tunnel_filter->tunnel_type = I40E_CLOUD_TYPE_UDP;
+
+ return 0;
+}
+
+static int
+i40e_tunnel_node_sctp_validate(const void *ctx __rte_unused,
+ const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+
+ /* only source/destination ports are supported */
+ if (sctp_mask->hdr.cksum || sctp_mask->hdr.tag) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid SCTP mask");
+ }
+
+ /* src/dst ports have to be fully masked or fully unmasked */
+ if (!CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.src_port) ||
+ !CI_FIELD_IS_ZERO_OR_MASKED(&sctp_mask->hdr.dst_port)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid SCTP mask");
+ }
+ /* there can be only one! */
+ if (sctp_mask->hdr.src_port && sctp_mask->hdr.dst_port) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid SCTP mask");
+ }
+ return 0;
+}
+
+static int
+i40e_tunnel_node_sctp_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_tunnel_ctx *tunnel_ctx = ctx;
+ struct i40e_tunnel_filter_conf *tunnel_filter = &tunnel_ctx->filter;
+ const struct rte_flow_item_sctp *sctp_spec = item->spec;
+ const struct rte_flow_item_sctp *sctp_mask = item->mask;
+
+ if (sctp_mask->hdr.src_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_SRC;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(sctp_spec->hdr.src_port);
+ } else if (sctp_mask->hdr.dst_port) {
+ tunnel_filter->l4_port_type = I40E_L4_PORT_TYPE_DST;
+ tunnel_filter->tenant_id = rte_be_to_cpu_32(sctp_spec->hdr.dst_port);
+ }
+ tunnel_filter->tunnel_type = I40E_CLOUD_TYPE_SCTP;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_tunnel_l4_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_TUNNEL_L4_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_TUNNEL_L4_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_TUNNEL_L4_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv4_process,
+ },
+ [I40E_TUNNEL_L4_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_tunnel_node_ipv6_process,
+ },
+ [I40E_TUNNEL_L4_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_tcp_validate,
+ .process = i40e_tunnel_node_tcp_process,
+ },
+ [I40E_TUNNEL_L4_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_udp_validate,
+ .process = i40e_tunnel_node_udp_process,
+ },
+ [I40E_TUNNEL_L4_NODE_SCTP] = {
+ .name = "SCTP",
+ .type = RTE_FLOW_ITEM_TYPE_SCTP,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_tunnel_node_sctp_validate,
+ .process = i40e_tunnel_node_sctp_process,
+ },
+ [I40E_TUNNEL_L4_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_TUNNEL_L4_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_IPV4,
+ I40E_TUNNEL_L4_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_TCP,
+ I40E_TUNNEL_L4_NODE_UDP,
+ I40E_TUNNEL_L4_NODE_SCTP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_TCP,
+ I40E_TUNNEL_L4_NODE_UDP,
+ I40E_TUNNEL_L4_NODE_SCTP,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_TCP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_TUNNEL_L4_NODE_SCTP] = {
+ .next = (const size_t[]) {
+ I40E_TUNNEL_L4_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
static int
i40e_tunnel_action_check(const struct ci_flow_actions *actions,
const struct ci_flow_actions_check_param *param,
@@ -1195,6 +1491,15 @@ const struct ci_flow_engine i40e_flow_engine_tunnel_gtp = {
.graph = &i40e_tunnel_gtp_graph,
};
+const struct ci_flow_engine i40e_flow_engine_tunnel_l4 = {
+ .name = "i40e_tunnel_l4",
+ .type = I40E_FLOW_ENGINE_TYPE_TUNNEL_L4,
+ .ops = &i40e_flow_engine_tunnel_ops,
+ .ctx_size = sizeof(struct i40e_tunnel_ctx),
+ .flow_size = sizeof(struct i40e_tunnel_flow),
+ .graph = &i40e_tunnel_l4_graph,
+};
+
const struct ci_flow_engine i40e_flow_engine_tunnel_qinq = {
.name = "i40e_tunnel_qinq",
.type = I40E_FLOW_ENGINE_TYPE_TUNNEL_QINQ,
--
2.47.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [RFC PATCH v1 21/21] net/i40e: reimplement hash parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (19 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 20/21] net/i40e: reimplement L4 cloud parser Anatoly Burakov
@ 2026-03-16 17:27 ` Anatoly Burakov
2026-03-17 0:42 ` [RFC PATCH v1 00/21] Building a better rte_flow parser Stephen Hemminger
21 siblings, 0 replies; 23+ messages in thread
From: Anatoly Burakov @ 2026-03-16 17:27 UTC (permalink / raw)
To: dev, Bruce Richardson
Use the new flow graph API and the common parsing framework to implement
the flow parser for RSS.
The RSS parser previously bypassed the generic infrastructure, probably
because it was very convoluted and did not map well onto that model.
It has now been made a first-class citizen of the common framework.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/intel/i40e/i40e_ethdev.h | 5 -
drivers/net/intel/i40e/i40e_flow.c | 178 +---
drivers/net/intel/i40e/i40e_flow.h | 6 +
drivers/net/intel/i40e/i40e_flow_hash.c | 1289 +++++++++++++++++++++++
drivers/net/intel/i40e/i40e_hash.c | 980 +----------------
drivers/net/intel/i40e/i40e_hash.h | 8 +-
drivers/net/intel/i40e/meson.build | 1 +
7 files changed, 1308 insertions(+), 1159 deletions(-)
create mode 100644 drivers/net/intel/i40e/i40e_flow_hash.c
diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h
index 2503830f22..d69e671cca 100644
--- a/drivers/net/intel/i40e/i40e_ethdev.h
+++ b/drivers/net/intel/i40e/i40e_ethdev.h
@@ -1291,11 +1291,6 @@ struct i40e_vf_representor {
extern const struct rte_flow_ops i40e_flow_ops;
-struct i40e_filter_ctx {
- struct i40e_rte_flow_rss_conf rss_conf;
- enum rte_filter_type type;
-};
-
int i40e_dev_switch_queues(struct i40e_pf *pf, bool on);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf,
diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c
index 2b4a4dd12c..a269efdec4 100644
--- a/drivers/net/intel/i40e/i40e_flow.c
+++ b/drivers/net/intel/i40e/i40e_flow.c
@@ -39,6 +39,9 @@ const struct ci_flow_engine_list i40e_flow_engine_list = {
&i40e_flow_engine_tunnel_mpls,
&i40e_flow_engine_tunnel_gtp,
&i40e_flow_engine_tunnel_l4,
+ &i40e_flow_engine_hash_pattern,
+ &i40e_flow_engine_hash_vlan,
+ &i40e_flow_engine_hash_empty,
}
};
@@ -164,40 +167,6 @@ i40e_flow_fdir_get_pctype_value(struct i40e_pf *pf,
return I40E_FILTER_PCTYPE_INVALID;
}
-static int
-i40e_flow_check(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct i40e_filter_ctx *filter_ctx,
- struct rte_flow_error *error)
-{
- int ret;
-
- ret = ci_flow_check_attr(attr, NULL, error);
- if (ret) {
- return ret;
- }
- /* action and pattern validation will happen in each respective engine */
-
- if (!pattern) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
- NULL, "NULL pattern.");
- return -rte_errno;
- }
-
- if (!actions) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_NUM,
- NULL, "NULL action.");
- return -rte_errno;
- }
-
- /* try parsing as RSS */
- filter_ctx->type = RTE_ETH_FILTER_HASH;
- return i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error);
-}
-
static int
i40e_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -206,17 +175,9 @@ i40e_flow_validate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = dev->data->dev_private;
- /* creates dummy context */
- struct i40e_filter_ctx filter_ctx = {0};
- int ret;
- /* try the new engine first */
- ret = ci_flow_validate(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ return ci_flow_validate(&pf->flow_engine_conf, &i40e_flow_engine_list,
attr, pattern, actions, error);
- if (ret == 0)
- return 0;
-
- return i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
}
static struct rte_flow *
@@ -227,52 +188,9 @@ i40e_flow_create(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_filter_ctx filter_ctx = {0};
- struct rte_flow *flow = NULL;
- int ret;
- /* try the new engine first */
- flow = ci_flow_create(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ return ci_flow_create(&pf->flow_engine_conf, &i40e_flow_engine_list,
attr, pattern, actions, error);
- if (flow != NULL)
- return flow;
-
- ret = i40e_flow_check(dev, attr, pattern, actions, &filter_ctx, error);
- if (ret < 0)
- return NULL;
-
- flow = rte_zmalloc("i40e_flow", sizeof(struct rte_flow), 0);
- if (!flow) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to allocate memory");
- return flow;
- }
-
- switch (filter_ctx.type) {
- case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_create(pf, &filter_ctx.rss_conf);
- if (ret)
- goto free_flow;
- flow->rule = TAILQ_LAST(&pf->rss_config_list,
- i40e_rss_conf_list);
- break;
- default:
- goto free_flow;
- }
-
- flow->filter_type = filter_ctx.type;
- TAILQ_INSERT_TAIL(&pf->flow_list, flow, node);
- return flow;
-
-free_flow:
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to create flow.");
-
- rte_free(flow);
-
- return NULL;
}
static int
@@ -281,55 +199,17 @@ i40e_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum rte_filter_type filter_type = flow->filter_type;
- int ret = 0;
- /* try the new engine first */
- ret = ci_flow_destroy(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ return ci_flow_destroy(&pf->flow_engine_conf, &i40e_flow_engine_list,
flow, error);
- if (ret == 0)
- return 0;
-
- switch (filter_type) {
- case RTE_ETH_FILTER_HASH:
- ret = i40e_hash_filter_destroy(pf, flow->rule);
- break;
- default:
- PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
- filter_type);
- ret = -EINVAL;
- break;
- }
-
- if (!ret) {
- TAILQ_REMOVE(&pf->flow_list, flow, node);
- rte_free(flow);
-
- } else
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to destroy flow.");
-
- return ret;
}
static int
i40e_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- int ret;
- /* flush the new engine first */
- ret = ci_flow_flush(&pf->flow_engine_conf, &i40e_flow_engine_list, error);
- if (ret != 0)
- return ret;
-
- ret = i40e_hash_filter_flush(pf);
- if (ret)
- rte_flow_error_set(error, -ret,
- RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
- "Failed to flush RSS flows.");
- return ret;
+ return ci_flow_flush(&pf->flow_engine_conf, &i40e_flow_engine_list, error);
}
static int
@@ -338,48 +218,8 @@ i40e_flow_query(struct rte_eth_dev *dev,
const struct rte_flow_action *actions,
void *data, struct rte_flow_error *error)
{
- struct i40e_pf *pf = dev->data->dev_private;
- struct i40e_rss_filter *rss_rule = (struct i40e_rss_filter *)flow->rule;
- enum rte_filter_type filter_type = flow->filter_type;
- struct rte_flow_action_rss *rss_conf = data;
- int ret;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- /* try the new engine first */
- ret = ci_flow_query(&pf->flow_engine_conf, &i40e_flow_engine_list,
+ return ci_flow_query(&pf->flow_engine_conf, &i40e_flow_engine_list,
flow, actions, data, error);
- if (ret == 0)
- return 0;
-
- if (!rss_rule) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL, "Invalid rule");
- return -rte_errno;
- }
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- switch (actions->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_RSS:
- if (filter_type != RTE_ETH_FILTER_HASH) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- actions,
- "action not supported");
- return -rte_errno;
- }
- rte_memcpy(rss_conf,
- &rss_rule->rss_filter_info.conf,
- sizeof(struct rte_flow_action_rss));
- break;
- default:
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- actions,
- "action not supported");
- }
- }
-
- return 0;
}
diff --git a/drivers/net/intel/i40e/i40e_flow.h b/drivers/net/intel/i40e/i40e_flow.h
index 24683dcff9..db4dbceca3 100644
--- a/drivers/net/intel/i40e/i40e_flow.h
+++ b/drivers/net/intel/i40e/i40e_flow.h
@@ -22,6 +22,9 @@ enum i40e_flow_engine_type {
I40E_FLOW_ENGINE_TYPE_TUNNEL_MPLS,
I40E_FLOW_ENGINE_TYPE_TUNNEL_GTP,
I40E_FLOW_ENGINE_TYPE_TUNNEL_L4,
+ I40E_FLOW_ENGINE_TYPE_HASH_PATTERN,
+ I40E_FLOW_ENGINE_TYPE_HASH_VLAN,
+ I40E_FLOW_ENGINE_TYPE_HASH_EMPTY,
};
extern const struct ci_flow_engine_list i40e_flow_engine_list;
@@ -34,5 +37,8 @@ extern const struct ci_flow_engine i40e_flow_engine_tunnel_nvgre;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_mpls;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_gtp;
extern const struct ci_flow_engine i40e_flow_engine_tunnel_l4;
+extern const struct ci_flow_engine i40e_flow_engine_hash_pattern;
+extern const struct ci_flow_engine i40e_flow_engine_hash_vlan;
+extern const struct ci_flow_engine i40e_flow_engine_hash_empty;
#endif /* _I40E_FLOW_H_ */
diff --git a/drivers/net/intel/i40e/i40e_flow_hash.c b/drivers/net/intel/i40e/i40e_flow_hash.c
new file mode 100644
index 0000000000..90ce7c7057
--- /dev/null
+++ b/drivers/net/intel/i40e/i40e_flow_hash.c
@@ -0,0 +1,1289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include "i40e_ethdev.h"
+#include "i40e_flow.h"
+#include "i40e_hash.h"
+
+#include "../common/flow_engine.h"
+#include "../common/flow_check.h"
+#include "../common/flow_util.h"
+
+struct i40e_hash_ctx {
+ struct ci_flow_engine_ctx base;
+ struct i40e_rte_flow_rss_conf rss_conf;
+ uint32_t pctype;
+ bool customized_ptype;
+};
+
+struct i40e_flow_engine_hash_flow {
+ struct rte_flow base;
+ struct i40e_rte_flow_rss_conf rss_conf;
+};
+
+#define I40E_VLAN_TCI_MASK ((0x7) << 13)
+
+#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+
+#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+ RTE_ETH_RSS_L2_SRC_ONLY | \
+ RTE_ETH_RSS_L2_DST_ONLY)
+
+#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
+ RTE_ETH_RSS_L3_SRC_ONLY | \
+ RTE_ETH_RSS_L3_DST_ONLY)
+
+#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+
+#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
+ RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+ RTE_ETH_RSS_L4_DST_ONLY)
+
+#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
+
+/* Structure of mapping RSS type to input set */
+struct i40e_hash_map_rss_inset {
+ uint64_t rss_type;
+ uint64_t inset;
+};
+
+const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
+ /* IPv4 */
+ { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+ { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+
+ { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+ I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+
+ { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+
+ { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+
+ { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
+
+ /* IPv6 */
+ { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+ { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+
+ { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+ I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+
+ { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+
+ { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+
+ { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+ I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
+
+ /* Port */
+ { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+
+ /* Ether */
+ { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+ { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+
+ /* VLAN */
+ { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+ { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+};
+
+static uint64_t
+i40e_hash_get_inset(uint64_t rss_types, bool symmetric_enable)
+{
+ uint64_t mask, inset = 0;
+ int i;
+
+ for (i = 0; i < (int)RTE_DIM(i40e_hash_rss_inset); i++) {
+ if (rss_types & i40e_hash_rss_inset[i].rss_type)
+ inset |= i40e_hash_rss_inset[i].inset;
+ }
+
+ if (!inset)
+ return 0;
+
+ /* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
+ * it is the same case as none of them are added.
+ */
+ mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
+ inset &= ~I40E_INSET_DMAC;
+ else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
+ inset &= ~I40E_INSET_SMAC;
+
+ mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
+ inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
+ else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
+ inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
+
+ mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+ if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
+ inset &= ~I40E_INSET_DST_PORT;
+ else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
+ inset &= ~I40E_INSET_SRC_PORT;
+
+ if (rss_types & I40E_HASH_L4_TYPES) {
+ uint64_t l3_mask = rss_types &
+ (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+ uint64_t l4_mask = rss_types &
+ (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+
+ if (l3_mask && !l4_mask)
+ inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
+ else if (!l3_mask && l4_mask)
+ inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST |
+ I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
+ }
+
+ /* SCTP Verification Tag is not required in hash computation for SYMMETRIC_TOEPLITZ */
+ if (symmetric_enable) {
+ mask = rss_types & RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
+ if (mask == RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+ inset &= ~I40E_INSET_SCTP_VT;
+
+ mask = rss_types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
+ if (mask == RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+ inset &= ~I40E_INSET_SCTP_VT;
+ }
+
+ return inset;
+}
+
+/*
+ * Hash pattern graph implementation
+ * Pattern: START -> ETH -> [VLAN] -> [VLAN] -> (IPv4|IPv6) -> (TCP|UDP|SCTP|ESP|L2TPV3OIP|AH)
+ * START -> ETH -> [VLAN] -> [VLAN] -> (IPv4|IPv6) frag
+ * START -> ETH -> [VLAN] -> [VLAN] -> (IPv4|IPv6) -> UDP -> (GTPC|ESP|GTPU)
+ * START -> ETH -> [VLAN] -> [VLAN] -> (IPv4|IPv6) -> UDP -> GTPU -> (IPv4|IPv6)
+ */
+enum i40e_hash_pattern_node_id {
+ I40E_HASH_PATTERN_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_HASH_PATTERN_NODE_ETH,
+ I40E_HASH_PATTERN_NODE_OUTER_VLAN,
+ I40E_HASH_PATTERN_NODE_INNER_VLAN,
+ I40E_HASH_PATTERN_NODE_IPV4,
+ I40E_HASH_PATTERN_NODE_IPV6,
+ I40E_HASH_PATTERN_NODE_IPV6_FRAG,
+ I40E_HASH_PATTERN_NODE_TCP,
+ I40E_HASH_PATTERN_NODE_UDP,
+ I40E_HASH_PATTERN_NODE_SCTP,
+ I40E_HASH_PATTERN_NODE_ESP,
+ I40E_HASH_PATTERN_NODE_GTPU,
+ I40E_HASH_PATTERN_NODE_GTPC,
+ I40E_HASH_PATTERN_NODE_L2TPV3OIP,
+ I40E_HASH_PATTERN_NODE_AH,
+ I40E_HASH_PATTERN_NODE_INNER_IPV4,
+ I40E_HASH_PATTERN_NODE_INNER_IPV6,
+ I40E_HASH_PATTERN_NODE_END,
+ I40E_HASH_PATTERN_NODE_MAX,
+};
+
+static int
+i40e_hash_pattern_node_eth_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = I40E_FILTER_PCTYPE_L2_PAYLOAD;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_ipv4_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ /* hash parser does not differentiate between frag and non-frag IPv4 until later */
+ hash_ctx->pctype = I40E_FILTER_PCTYPE_NONF_IPV4_OTHER;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_ipv6_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = I40E_FILTER_PCTYPE_NONF_IPV6_OTHER;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_ipv6_frag_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = I40E_FILTER_PCTYPE_FRAG_IPV6;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_tcp_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) ?
+ I40E_FILTER_PCTYPE_NONF_IPV4_TCP :
+ I40E_FILTER_PCTYPE_NONF_IPV6_TCP;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_udp_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) ?
+ I40E_FILTER_PCTYPE_NONF_IPV4_UDP :
+ I40E_FILTER_PCTYPE_NONF_IPV6_UDP;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_sctp_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) ?
+ I40E_FILTER_PCTYPE_NONF_IPV4_SCTP :
+ I40E_FILTER_PCTYPE_NONF_IPV6_SCTP;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_esp_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ bool ipv4 = false;
+ bool udp = false;
+
+ /* ESP can be over IP or over UDP */
+ ipv4 = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER ||
+ hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_UDP);
+ udp = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_UDP ||
+ hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV6_UDP);
+ if (udp) {
+ hash_ctx->pctype = ipv4 ? I40E_CUSTOMIZED_ESP_IPV4_UDP : I40E_CUSTOMIZED_ESP_IPV6_UDP;
+ } else {
+ hash_ctx->pctype = ipv4 ? I40E_CUSTOMIZED_ESP_IPV4 : I40E_CUSTOMIZED_ESP_IPV6;
+ }
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_gtpu_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ /* GTPU pctype does not differentiate between IPv4 and IPv6 */
+ hash_ctx->pctype = I40E_CUSTOMIZED_GTPU;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_gtpc_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ /* GTPC pctype does not differentiate between IPv4 and IPv6 */
+ hash_ctx->pctype = I40E_CUSTOMIZED_GTPC;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_l2tpv3oip_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ hash_ctx->pctype = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) ?
+ I40E_CUSTOMIZED_IPV4_L2TPV3 :
+ I40E_CUSTOMIZED_IPV6_L2TPV3;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_ah_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ hash_ctx->pctype = (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) ?
+ I40E_CUSTOMIZED_AH_IPV4 :
+ I40E_CUSTOMIZED_AH_IPV6;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_inner_ipv4_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ /* inner IP patterns are always over GTP-U */
+ hash_ctx->pctype = I40E_CUSTOMIZED_GTPU_IPV4;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_inner_ipv6_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ /* inner IP patterns are always over GTP-U */
+ hash_ctx->pctype = I40E_CUSTOMIZED_GTPU_IPV6;
+ hash_ctx->customized_ptype = true;
+ return 0;
+}
+
+static int
+i40e_hash_pattern_node_end_process(void *ctx,
+ const struct rte_flow_item *item __rte_unused,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+
+ /* if RSS hash type for IPv4 frag was requested, change pctype */
+ if (hash_ctx->pctype == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER &&
+ (hash_ctx->rss_conf.conf.types & RTE_ETH_RSS_FRAG_IPV4))
+ hash_ctx->pctype = I40E_FILTER_PCTYPE_FRAG_IPV4;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_hash_pattern_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_HASH_PATTERN_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_HASH_PATTERN_NODE_ETH] = {
+ .name = "ETH",
+ .type = RTE_FLOW_ITEM_TYPE_ETH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_eth_process,
+ },
+ [I40E_HASH_PATTERN_NODE_OUTER_VLAN] = {
+ .name = "OUTER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_VLAN] = {
+ .name = "INNER_VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ },
+ [I40E_HASH_PATTERN_NODE_IPV4] = {
+ .name = "IPv4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_ipv4_process,
+ },
+ [I40E_HASH_PATTERN_NODE_IPV6] = {
+ .name = "IPv6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_ipv6_process,
+ },
+ [I40E_HASH_PATTERN_NODE_IPV6_FRAG] = {
+ .name = "IPv6_FRAG",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_ipv6_frag_process,
+ },
+ [I40E_HASH_PATTERN_NODE_TCP] = {
+ .name = "TCP",
+ .type = RTE_FLOW_ITEM_TYPE_TCP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_tcp_process,
+ },
+ [I40E_HASH_PATTERN_NODE_UDP] = {
+ .name = "UDP",
+ .type = RTE_FLOW_ITEM_TYPE_UDP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_udp_process,
+ },
+ [I40E_HASH_PATTERN_NODE_SCTP] = {
+ .name = "SCTP",
+ .type = RTE_FLOW_ITEM_TYPE_SCTP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_sctp_process,
+ },
+ [I40E_HASH_PATTERN_NODE_ESP] = {
+ .name = "ESP",
+ .type = RTE_FLOW_ITEM_TYPE_ESP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_esp_process,
+ },
+ [I40E_HASH_PATTERN_NODE_GTPU] = {
+ .name = "GTPU",
+ .type = RTE_FLOW_ITEM_TYPE_GTPU,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_gtpu_process,
+ },
+ [I40E_HASH_PATTERN_NODE_GTPC] = {
+ .name = "GTPC",
+ .type = RTE_FLOW_ITEM_TYPE_GTPC,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_gtpc_process,
+ },
+ [I40E_HASH_PATTERN_NODE_L2TPV3OIP] = {
+ .name = "L2TPV3OIP",
+ .type = RTE_FLOW_ITEM_TYPE_L2TPV3OIP,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_l2tpv3oip_process,
+ },
+ [I40E_HASH_PATTERN_NODE_AH] = {
+ .name = "AH",
+ .type = RTE_FLOW_ITEM_TYPE_AH,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_ah_process,
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_IPV4] = {
+ .name = "INNER_IPV4",
+ .type = RTE_FLOW_ITEM_TYPE_IPV4,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_inner_ipv4_process,
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_IPV6] = {
+ .name = "INNER_IPV6",
+ .type = RTE_FLOW_ITEM_TYPE_IPV6,
+ .constraints = RTE_FLOW_NODE_EXPECT_EMPTY,
+ .process = i40e_hash_pattern_node_inner_ipv6_process,
+ },
+ [I40E_HASH_PATTERN_NODE_END] = {
+ .name = "END",
+ .type = RTE_FLOW_ITEM_TYPE_END,
+ .process = i40e_hash_pattern_node_end_process,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_HASH_PATTERN_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_ETH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_ETH] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_OUTER_VLAN,
+ I40E_HASH_PATTERN_NODE_IPV4,
+ I40E_HASH_PATTERN_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_OUTER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_INNER_VLAN,
+ I40E_HASH_PATTERN_NODE_IPV4,
+ I40E_HASH_PATTERN_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_IPV4,
+ I40E_HASH_PATTERN_NODE_IPV6,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_TCP,
+ I40E_HASH_PATTERN_NODE_UDP,
+ I40E_HASH_PATTERN_NODE_SCTP,
+ I40E_HASH_PATTERN_NODE_ESP,
+ I40E_HASH_PATTERN_NODE_L2TPV3OIP,
+ I40E_HASH_PATTERN_NODE_AH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_IPV6_FRAG,
+ I40E_HASH_PATTERN_NODE_TCP,
+ I40E_HASH_PATTERN_NODE_UDP,
+ I40E_HASH_PATTERN_NODE_SCTP,
+ I40E_HASH_PATTERN_NODE_ESP,
+ I40E_HASH_PATTERN_NODE_L2TPV3OIP,
+ I40E_HASH_PATTERN_NODE_AH,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_IPV6_FRAG] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_TCP] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_UDP] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_GTPU,
+ I40E_HASH_PATTERN_NODE_GTPC,
+ I40E_HASH_PATTERN_NODE_ESP,
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_SCTP] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_ESP] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_GTPU] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_INNER_IPV4,
+ I40E_HASH_PATTERN_NODE_INNER_IPV6,
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_GTPC] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_L2TPV3OIP] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_AH] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_IPV4] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_PATTERN_NODE_INNER_IPV6] = {
+ .next = (const size_t[]) {
+ I40E_HASH_PATTERN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+/*
+ * Hash VLAN graph implementation
+ * Pattern: START -> VLAN -> END
+ */
+enum i40e_hash_vlan_node_id {
+ I40E_HASH_VLAN_NODE_START = RTE_FLOW_NODE_FIRST,
+ I40E_HASH_VLAN_NODE_VLAN,
+ I40E_HASH_VLAN_NODE_END,
+ I40E_HASH_VLAN_NODE_MAX,
+};
+
+static int
+i40e_hash_node_vlan_validate(const void *ctx __rte_unused, const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vlan *vlan_mask = item->mask;
+
+ /* for mask, VLAN/TCI must be masked appropriately */
+ if (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) != I40E_VLAN_TCI_MASK) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "Invalid VLAN mask");
+ }
+ return 0;
+}
+
+static int
+i40e_hash_node_vlan_process(void *ctx, const struct rte_flow_item *item,
+ struct rte_flow_error *error __rte_unused)
+{
+ struct i40e_hash_ctx *hash_ctx = ctx;
+ const struct rte_flow_item_vlan *vlan_spec = item->spec;
+
+ hash_ctx->rss_conf.region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
+
+ return 0;
+}
+
+const struct rte_flow_graph i40e_hash_vlan_graph = {
+ .nodes = (struct rte_flow_graph_node[]) {
+ [I40E_HASH_VLAN_NODE_START] = {
+ .name = "START",
+ },
+ [I40E_HASH_VLAN_NODE_VLAN] = {
+ .name = "VLAN",
+ .type = RTE_FLOW_ITEM_TYPE_VLAN,
+ .constraints = RTE_FLOW_NODE_EXPECT_SPEC_MASK,
+ .validate = i40e_hash_node_vlan_validate,
+ .process = i40e_hash_node_vlan_process,
+ },
+ },
+ .edges = (struct rte_flow_graph_edge[]) {
+ [I40E_HASH_VLAN_NODE_START] = {
+ .next = (const size_t[]) {
+ I40E_HASH_VLAN_NODE_VLAN,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ [I40E_HASH_VLAN_NODE_VLAN] = {
+ .next = (const size_t[]) {
+ I40E_HASH_VLAN_NODE_END,
+ RTE_FLOW_NODE_EDGE_END
+ }
+ },
+ },
+};
+
+static bool
+i40e_hash_validate_rss_types(uint64_t rss_types)
+{
+ uint64_t type, mask;
+
+ /* Validate L2 */
+ type = RTE_ETH_RSS_ETH & rss_types;
+ mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
+ if (!type && mask)
+ return false;
+
+ /* Validate L3 */
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+ mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
+ if (!type && mask)
+ return false;
+
+ /* Validate L4 */
+ type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+ mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
+ if (!type && mask)
+ return false;
+
+ return true;
+}
+
+static int
+i40e_hash_validate_rss_common(const struct rte_flow_action_rss *rss_act,
+ struct rte_flow_error *error)
+{
+ /* symmetric toeplitz is only supported when a specific pattern is provided */
+ if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Symmetric hash function not supported without specific patterns");
+ }
+
+ /* hash types are not supported for global RSS configuration */
+ if (rss_act->types != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS types not supported without a pattern");
+ }
+
+ /* check RSS key length if it is specified */
+ if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS key length must be 52 bytes");
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_pattern_rss_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param __rte_unused,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
+
+ /* queue list is not supported when a pattern is specified */
+ if (rss_act->queue_num != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS queues not supported when pattern specified");
+ }
+
+ /* disallow unsupported hash functions */
+ switch (rss_act->func) {
+ case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
+ break;
+ default:
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS hash function not supported when pattern specified");
+ }
+
+ if (!i40e_hash_validate_rss_types(rss_act->types))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ rss_act, "RSS types are invalid");
+
+ /* check RSS key length if it is specified */
+ if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS key length must be 52 bytes");
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_pattern_ctx_parse(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_hash_ctx *hash_ctx = (struct i40e_hash_ctx *)ctx;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 1,
+ .driver_ctx = ctx->dev,
+ .check = i40e_hash_pattern_rss_check,
+ };
+ const struct rte_flow_action_rss *rss_act;
+ int ret;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret != 0)
+ return ret;
+
+ ret = ci_flow_check_actions(actions, &param, &parsed_actions, error);
+ if (ret != 0)
+ return ret;
+
+ rss_act = parsed_actions.actions[0]->conf;
+
+ hash_ctx->rss_conf.symmetric_enable =
+ rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ;
+ hash_ctx->rss_conf.conf.func = rss_act->func;
+ hash_ctx->rss_conf.conf.types = rss_act->types;
+
+ if (rss_act->key_len != 0) {
+ memcpy(hash_ctx->rss_conf.key, rss_act->key, I40E_RSS_KEY_LEN);
+ hash_ctx->rss_conf.conf.key = hash_ctx->rss_conf.key;
+ hash_ctx->rss_conf.conf.key_len = rss_act->key_len;
+ }
+
+ hash_ctx->rss_conf.inset = i40e_hash_get_inset(rss_act->types,
+ hash_ctx->rss_conf.symmetric_enable);
+
+ return 0;
+}
+
+static uint64_t
+i40e_hash_get_x722_ext_pctypes(uint8_t match_pctype)
+{
+ uint64_t pctypes = 0;
+
+ switch (match_pctype) {
+ case I40E_FILTER_PCTYPE_NONF_IPV4_TCP:
+ pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
+ break;
+
+ case I40E_FILTER_PCTYPE_NONF_IPV4_UDP:
+ pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
+ BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
+ break;
+
+ case I40E_FILTER_PCTYPE_NONF_IPV6_TCP:
+ pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
+ break;
+
+ case I40E_FILTER_PCTYPE_NONF_IPV6_UDP:
+ pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
+ BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
+ break;
+ }
+
+ return pctypes;
+}
+
+static int
+i40e_hash_translate_gtp_inset(struct i40e_rte_flow_rss_conf *rss_conf,
+ struct rte_flow_error *error)
+{
+ if (rss_conf->inset &
+ (I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC |
+ I40E_INSET_DST_PORT | I40E_INSET_SRC_PORT))
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL,
+ "Only support external destination IP");
+
+ if (rss_conf->inset & I40E_INSET_IPV4_DST)
+ rss_conf->inset = (rss_conf->inset & ~I40E_INSET_IPV4_DST) |
+ I40E_INSET_TUNNEL_IPV4_DST;
+
+ if (rss_conf->inset & I40E_INSET_IPV6_DST)
+ rss_conf->inset = (rss_conf->inset & ~I40E_INSET_IPV6_DST) |
+ I40E_INSET_TUNNEL_IPV6_DST;
+
+ return 0;
+}
+
+static int
+i40e_hash_pattern_ctx_validate(struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(ctx->dev->data->dev_private);
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(ctx->dev->data->dev_private);
+ struct i40e_hash_ctx *hash_ctx = (struct i40e_hash_ctx *)ctx;
+ const struct hash_type_to_pattern {
+ uint32_t pctype;
+ uint64_t valid_rss_flags;
+ } valid_pctype_to_pattern[] = {
+ /* Ether */
+ {I40E_FILTER_PCTYPE_L2_PAYLOAD, I40E_HASH_L2_RSS_MASK | RTE_ETH_RSS_L2_PAYLOAD},
+ /* IP */
+ {I40E_FILTER_PCTYPE_NONF_IPV4_OTHER, RTE_ETH_RSS_NONFRAG_IPV4_OTHER | I40E_HASH_IPV4_L23_RSS_MASK},
+ {I40E_FILTER_PCTYPE_NONF_IPV6_OTHER, RTE_ETH_RSS_NONFRAG_IPV6_OTHER | I40E_HASH_IPV6_L23_RSS_MASK},
+ /* IP fragmented */
+ {I40E_FILTER_PCTYPE_FRAG_IPV4, RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK},
+ {I40E_FILTER_PCTYPE_FRAG_IPV6, RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK},
+ /* TCP */
+ {I40E_FILTER_PCTYPE_NONF_IPV4_TCP, RTE_ETH_RSS_NONFRAG_IPV4_TCP | I40E_HASH_IPV4_L234_RSS_MASK},
+ {I40E_FILTER_PCTYPE_NONF_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_TCP | I40E_HASH_IPV6_L234_RSS_MASK},
+ /* UDP */
+ {I40E_FILTER_PCTYPE_NONF_IPV4_UDP, RTE_ETH_RSS_NONFRAG_IPV4_UDP | I40E_HASH_IPV4_L234_RSS_MASK},
+ {I40E_FILTER_PCTYPE_NONF_IPV6_UDP, RTE_ETH_RSS_NONFRAG_IPV6_UDP | I40E_HASH_IPV6_L234_RSS_MASK},
+ /* SCTP */
+ {I40E_FILTER_PCTYPE_NONF_IPV4_SCTP, RTE_ETH_RSS_NONFRAG_IPV4_SCTP | I40E_HASH_IPV4_L234_RSS_MASK},
+ {I40E_FILTER_PCTYPE_NONF_IPV6_SCTP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP | I40E_HASH_IPV6_L234_RSS_MASK},
+ /* AH */
+ {I40E_CUSTOMIZED_AH_IPV4, RTE_ETH_RSS_AH},
+ {I40E_CUSTOMIZED_AH_IPV6, RTE_ETH_RSS_AH},
+ /* L2TPV3 */
+ {I40E_CUSTOMIZED_IPV4_L2TPV3, RTE_ETH_RSS_L2TPV3},
+ {I40E_CUSTOMIZED_IPV6_L2TPV3, RTE_ETH_RSS_L2TPV3},
+ /* ESP */
+ {I40E_CUSTOMIZED_ESP_IPV4, RTE_ETH_RSS_ESP},
+ {I40E_CUSTOMIZED_ESP_IPV6, RTE_ETH_RSS_ESP},
+ {I40E_CUSTOMIZED_ESP_IPV4_UDP, RTE_ETH_RSS_ESP},
+ {I40E_CUSTOMIZED_ESP_IPV6_UDP, RTE_ETH_RSS_ESP},
+ /* GTPC */
+ {I40E_CUSTOMIZED_GTPC, I40E_HASH_IPV4_L234_RSS_MASK},
+ {I40E_CUSTOMIZED_GTPC, I40E_HASH_IPV6_L234_RSS_MASK},
+ /* GTPU */
+ {I40E_CUSTOMIZED_GTPU, I40E_HASH_IPV4_L234_RSS_MASK},
+ {I40E_CUSTOMIZED_GTPU, I40E_HASH_IPV6_L234_RSS_MASK},
+ /* IP over GTPU */
+ {I40E_CUSTOMIZED_GTPU_IPV4, RTE_ETH_RSS_GTPU},
+ {I40E_CUSTOMIZED_GTPU_IPV6, RTE_ETH_RSS_GTPU},
+ };
+ size_t i;
+
+ for (i = 0; i < RTE_DIM(valid_pctype_to_pattern); i++) {
+ uint32_t pctype = valid_pctype_to_pattern[i].pctype;
+ uint64_t flags = valid_pctype_to_pattern[i].valid_rss_flags;
+
+ if (pctype != hash_ctx->pctype)
+ continue;
+
+ /* check whether the specified RSS types are valid for this pctype */
+ if ((hash_ctx->rss_conf.conf.types & ~flags) != 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, &hash_ctx->rss_conf.conf,
+ "Some RSS types are not supported for the specified pattern");
+ }
+
+ /* for customized pctypes, check whether the device supports it */
+ if (hash_ctx->customized_ptype) {
+ struct i40e_customized_pctype *ct;
+
+ ct = i40e_find_customized_pctype(pf, pctype);
+
+ if (ct == NULL || !ct->valid) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, &hash_ctx->rss_conf.conf,
+ "Specified pattern is not supported by the device");
+ }
+ hash_ctx->rss_conf.config_pctypes |= BIT_ULL(ct->pctype);
+
+ /* GTPC/GTPU endpoints require special inset handling */
+ if (pctype == I40E_CUSTOMIZED_GTPC || pctype == I40E_CUSTOMIZED_GTPU)
+ return i40e_hash_translate_gtp_inset(&hash_ctx->rss_conf, error);
+ } else {
+ hash_ctx->rss_conf.config_pctypes |= BIT_ULL(pctype);
+
+ /* X722 needs special handling */
+ if (hw->mac.type == I40E_MAC_X722) {
+ uint64_t types = i40e_hash_get_x722_ext_pctypes(pctype);
+ hash_ctx->rss_conf.config_pctypes |= types;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_queue_region_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
+ struct rte_eth_dev *dev = param->driver_ctx;
+ const struct i40e_pf *pf;
+ uint64_t hash_queues;
+
+ if (i40e_hash_validate_rss_common(rss_act, error))
+ return -rte_errno;
+
+ RTE_BUILD_BUG_ON(sizeof(hash_queues) != sizeof(pf->hash_enabled_queues));
+
+ /* having RSS key is not supported */
+ if (rss_act->key != NULL) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS key not supported");
+ }
+
+ /* queue region must be specified */
+ if (rss_act->queue_num == 0) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS queues missing");
+ }
+
+ /* queue region must be power of two */
+ if (!rte_is_power_of_2(rss_act->queue_num)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS queue number must be power of two");
+ }
+
+ /* generic checks already filtered out discontiguous/non-unique RSS queues */
+
+ /* queues must not exceed maximum queues per traffic class */
+ if (rss_act->queue[rss_act->queue_num - 1] >= I40E_MAX_Q_PER_TC) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Invalid RSS queue index");
+ }
+
+ /* queues must be in LUT */
+ pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) &
+ ~(BIT_ULL(rss_act->queue[0]) - 1);
+
+ if (hash_queues & ~pf->hash_enabled_queues) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ rss_act, "Some queues are not in LUT");
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_vlan_ctx_parse(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_hash_ctx *hash_ctx = (struct i40e_hash_ctx *)ctx;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 1,
+ .driver_ctx = ctx->dev,
+ .check = i40e_hash_queue_region_check,
+ };
+ const struct rte_flow_action_rss *rss_act;
+ int ret;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret != 0)
+ return ret;
+
+ ret = ci_flow_check_actions(actions, &param, &parsed_actions, error);
+ if (ret != 0)
+ return ret;
+
+ rss_act = parsed_actions.actions[0]->conf;
+ hash_ctx->rss_conf.conf.func = rss_act->func;
+ hash_ctx->rss_conf.region_queue_num = rss_act->queue_num;
+ hash_ctx->rss_conf.region_queue_start = rss_act->queue[0];
+
+ return 0;
+}
+
+static int
+i40e_hash_queue_list_check(const struct ci_flow_actions *actions,
+ const struct ci_flow_actions_check_param *param,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
+ struct rte_eth_dev *dev = param->driver_ctx;
+ struct i40e_pf *pf;
+ struct i40e_hw *hw;
+ uint16_t max_queue;
+ bool has_queue, has_key;
+
+ if (i40e_hash_validate_rss_common(rss_act, error))
+ return -rte_errno;
+
+ has_queue = rss_act->queue != NULL;
+ has_key = rss_act->key != NULL;
+
+ /* if we have queues, we must not have key */
+ if (has_queue && has_key) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "RSS key for queue region is not supported");
+ }
+
+ /* if there are no queues, no further checks needed */
+ if (!has_queue)
+ return 0;
+
+ /* check queue number limits */
+ hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ if (rss_act->queue_num > hw->func_caps.rss_table_size) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ rss_act, "Too many RSS queues");
+ }
+
+ pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
+ max_queue = i40e_pf_calc_configured_queues_num(pf);
+ else
+ max_queue = pf->dev_data->nb_rx_queues;
+
+ max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
+
+ /* we know RSS queues are contiguous so we only need to check last queue */
+ if (rss_act->queue[rss_act->queue_num - 1] >= max_queue) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
+ "Invalid RSS queue");
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_empty_ctx_parse(const struct rte_flow_action actions[],
+ const struct rte_flow_attr *attr,
+ struct ci_flow_engine_ctx *ctx,
+ struct rte_flow_error *error)
+{
+ struct i40e_hash_ctx *hash_ctx = (struct i40e_hash_ctx *)ctx;
+ struct ci_flow_actions parsed_actions = {0};
+ struct ci_flow_actions_check_param param = {
+ .allowed_types = (enum rte_flow_action_type[]) {
+ RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_END
+ },
+ .max_actions = 1,
+ .driver_ctx = ctx->dev,
+ .check = i40e_hash_queue_list_check,
+ };
+ const struct rte_flow_action_rss *rss_act;
+ int ret;
+
+ ret = ci_flow_check_attr(attr, NULL, error);
+ if (ret != 0)
+ return ret;
+
+ ret = ci_flow_check_actions(actions, &param, &parsed_actions, error);
+ if (ret != 0)
+ return ret;
+
+ rss_act = parsed_actions.actions[0]->conf;
+ hash_ctx->rss_conf.conf.func = rss_act->func;
+
+ /* if we have queues, copy them */
+ if (rss_act->queue_num > 0) {
+ memcpy(hash_ctx->rss_conf.queue,
+ rss_act->queue,
+ rss_act->queue_num * sizeof(hash_ctx->rss_conf.queue[0]));
+ hash_ctx->rss_conf.conf.queue = hash_ctx->rss_conf.queue;
+ hash_ctx->rss_conf.conf.queue_num = rss_act->queue_num;
+ /* if we have key, copy it */
+ } else if (rss_act->key_len > 0) {
+ memcpy(hash_ctx->rss_conf.key,
+ rss_act->key,
+ rss_act->key_len);
+ hash_ctx->rss_conf.conf.key = hash_ctx->rss_conf.key;
+ hash_ctx->rss_conf.conf.key_len = rss_act->key_len;
+ }
+
+ return 0;
+}
+
+static int
+i40e_hash_ctx_to_flow(const struct ci_flow_engine_ctx *ctx,
+ struct ci_flow *flow,
+ struct rte_flow_error *error __rte_unused)
+{
+ const struct i40e_hash_ctx *hash_ctx = (const struct i40e_hash_ctx *)ctx;
+ struct i40e_flow_engine_hash_flow *hash_flow = (struct i40e_flow_engine_hash_flow *)flow;
+
+ hash_flow->rss_conf = hash_ctx->rss_conf;
+
+ return 0;
+}
+
+static int
+i40e_hash_flow_install(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct i40e_flow_engine_hash_flow *hash_flow = (struct i40e_flow_engine_hash_flow *)flow;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(flow->dev->data->dev_private);
+ int ret;
+
+ ret = i40e_hash_filter_create(pf, &hash_flow->rss_conf);
+ if (ret < 0) {
+ return rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, flow,
+ "Failed to create hash filter");
+ }
+ return 0;
+}
+
+static int
+i40e_hash_flow_uninstall(struct ci_flow *flow,
+ struct rte_flow_error *error)
+{
+ struct i40e_flow_engine_hash_flow *hash_flow = (struct i40e_flow_engine_hash_flow *)flow;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(flow->dev->data->dev_private);
+ int ret;
+
+ ret = i40e_hash_filter_destroy(pf, &hash_flow->rss_conf);
+ if (ret < 0) {
+ return rte_flow_error_set(error, -ret,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, flow,
+ "Failed to destroy hash filter");
+ }
+ return 0;
+}
+
+static int
+i40e_hash_flow_query(struct ci_flow *flow,
+ const struct rte_flow_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ struct i40e_flow_engine_hash_flow *hash_flow = (struct i40e_flow_engine_hash_flow *)flow;
+ struct i40e_rte_flow_rss_conf *rss_conf = data;
+
+ if (action->type != RTE_FLOW_ACTION_TYPE_RSS) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "Unsupported action for query");
+ }
+
+ memcpy(rss_conf, &hash_flow->rss_conf, sizeof(*rss_conf));
+ return 0;
+}
+
+const struct ci_flow_engine_ops i40e_flow_engine_hash_pattern_ops = {
+ .ctx_parse = i40e_hash_pattern_ctx_parse,
+ .ctx_validate = i40e_hash_pattern_ctx_validate,
+ .ctx_to_flow = i40e_hash_ctx_to_flow,
+ .flow_install = i40e_hash_flow_install,
+ .flow_uninstall = i40e_hash_flow_uninstall,
+ .flow_query = i40e_hash_flow_query,
+};
+
+const struct ci_flow_engine_ops i40e_flow_engine_hash_vlan_ops = {
+ .ctx_parse = i40e_hash_vlan_ctx_parse,
+ .ctx_to_flow = i40e_hash_ctx_to_flow,
+ .flow_install = i40e_hash_flow_install,
+ .flow_uninstall = i40e_hash_flow_uninstall,
+ .flow_query = i40e_hash_flow_query,
+};
+
+const struct ci_flow_engine_ops i40e_flow_engine_hash_empty_ops = {
+ .ctx_parse = i40e_hash_empty_ctx_parse,
+ .ctx_to_flow = i40e_hash_ctx_to_flow,
+ .flow_install = i40e_hash_flow_install,
+ .flow_uninstall = i40e_hash_flow_uninstall,
+ .flow_query = i40e_hash_flow_query,
+};
+
+const struct ci_flow_engine i40e_flow_engine_hash_pattern = {
+ .name = "i40e_hash_pattern",
+ .type = I40E_FLOW_ENGINE_TYPE_HASH_PATTERN,
+ .ops = &i40e_flow_engine_hash_pattern_ops,
+ .graph = &i40e_hash_pattern_graph,
+ .ctx_size = sizeof(struct i40e_hash_ctx),
+ .flow_size = sizeof(struct i40e_flow_engine_hash_flow),
+};
+
+const struct ci_flow_engine i40e_flow_engine_hash_vlan = {
+ .name = "i40e_hash_vlan",
+ .type = I40E_FLOW_ENGINE_TYPE_HASH_VLAN,
+ .ops = &i40e_flow_engine_hash_vlan_ops,
+ .graph = &i40e_hash_vlan_graph,
+ .ctx_size = sizeof(struct i40e_hash_ctx),
+ .flow_size = sizeof(struct i40e_flow_engine_hash_flow),
+};
+
+const struct ci_flow_engine i40e_flow_engine_hash_empty = {
+ .name = "i40e_hash_empty",
+ .type = I40E_FLOW_ENGINE_TYPE_HASH_EMPTY,
+ .ops = &i40e_flow_engine_hash_empty_ops,
+ .ctx_size = sizeof(struct i40e_hash_ctx),
+ .flow_size = sizeof(struct i40e_flow_engine_hash_flow),
+};
diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c
index 4379ef2be8..e416465713 100644
--- a/drivers/net/intel/i40e/i40e_hash.c
+++ b/drivers/net/intel/i40e/i40e_hash.c
@@ -17,224 +17,6 @@
#include "i40e_hash.h"
#include "../common/flow_check.h"
-
-#ifndef BIT
-#define BIT(n) (1UL << (n))
-#endif
-
-#ifndef BIT_ULL
-#define BIT_ULL(n) (1ULL << (n))
-#endif
-
-/* Pattern item headers */
-#define I40E_HASH_HDR_ETH 0x01ULL
-#define I40E_HASH_HDR_IPV4 0x10ULL
-#define I40E_HASH_HDR_IPV6 0x20ULL
-#define I40E_HASH_HDR_IPV6_FRAG 0x40ULL
-#define I40E_HASH_HDR_TCP 0x100ULL
-#define I40E_HASH_HDR_UDP 0x200ULL
-#define I40E_HASH_HDR_SCTP 0x400ULL
-#define I40E_HASH_HDR_ESP 0x10000ULL
-#define I40E_HASH_HDR_L2TPV3 0x20000ULL
-#define I40E_HASH_HDR_AH 0x40000ULL
-#define I40E_HASH_HDR_GTPC 0x100000ULL
-#define I40E_HASH_HDR_GTPU 0x200000ULL
-
-#define I40E_HASH_HDR_INNER_SHIFT 32
-#define I40E_HASH_HDR_IPV4_INNER (I40E_HASH_HDR_IPV4 << \
- I40E_HASH_HDR_INNER_SHIFT)
-#define I40E_HASH_HDR_IPV6_INNER (I40E_HASH_HDR_IPV6 << \
- I40E_HASH_HDR_INNER_SHIFT)
-
-/* ETH */
-#define I40E_PHINT_ETH I40E_HASH_HDR_ETH
-
-/* IPv4 */
-#define I40E_PHINT_IPV4 (I40E_HASH_HDR_ETH | I40E_HASH_HDR_IPV4)
-#define I40E_PHINT_IPV4_TCP (I40E_PHINT_IPV4 | I40E_HASH_HDR_TCP)
-#define I40E_PHINT_IPV4_UDP (I40E_PHINT_IPV4 | I40E_HASH_HDR_UDP)
-#define I40E_PHINT_IPV4_SCTP (I40E_PHINT_IPV4 | I40E_HASH_HDR_SCTP)
-
-/* IPv6 */
-#define I40E_PHINT_IPV6 (I40E_HASH_HDR_ETH | I40E_HASH_HDR_IPV6)
-#define I40E_PHINT_IPV6_FRAG (I40E_PHINT_IPV6 | \
- I40E_HASH_HDR_IPV6_FRAG)
-#define I40E_PHINT_IPV6_TCP (I40E_PHINT_IPV6 | I40E_HASH_HDR_TCP)
-#define I40E_PHINT_IPV6_UDP (I40E_PHINT_IPV6 | I40E_HASH_HDR_UDP)
-#define I40E_PHINT_IPV6_SCTP (I40E_PHINT_IPV6 | I40E_HASH_HDR_SCTP)
-
-/* ESP */
-#define I40E_PHINT_IPV4_ESP (I40E_PHINT_IPV4 | I40E_HASH_HDR_ESP)
-#define I40E_PHINT_IPV6_ESP (I40E_PHINT_IPV6 | I40E_HASH_HDR_ESP)
-#define I40E_PHINT_IPV4_UDP_ESP (I40E_PHINT_IPV4_UDP | \
- I40E_HASH_HDR_ESP)
-#define I40E_PHINT_IPV6_UDP_ESP (I40E_PHINT_IPV6_UDP | \
- I40E_HASH_HDR_ESP)
-
-/* GTPC */
-#define I40E_PHINT_IPV4_GTPC (I40E_PHINT_IPV4_UDP | \
- I40E_HASH_HDR_GTPC)
-#define I40E_PHINT_IPV6_GTPC (I40E_PHINT_IPV6_UDP | \
- I40E_HASH_HDR_GTPC)
-
-/* GTPU */
-#define I40E_PHINT_IPV4_GTPU (I40E_PHINT_IPV4_UDP | \
- I40E_HASH_HDR_GTPU)
-#define I40E_PHINT_IPV4_GTPU_IPV4 (I40E_PHINT_IPV4_GTPU | \
- I40E_HASH_HDR_IPV4_INNER)
-#define I40E_PHINT_IPV4_GTPU_IPV6 (I40E_PHINT_IPV4_GTPU | \
- I40E_HASH_HDR_IPV6_INNER)
-#define I40E_PHINT_IPV6_GTPU (I40E_PHINT_IPV6_UDP | \
- I40E_HASH_HDR_GTPU)
-#define I40E_PHINT_IPV6_GTPU_IPV4 (I40E_PHINT_IPV6_GTPU | \
- I40E_HASH_HDR_IPV4_INNER)
-#define I40E_PHINT_IPV6_GTPU_IPV6 (I40E_PHINT_IPV6_GTPU | \
- I40E_HASH_HDR_IPV6_INNER)
-
-/* L2TPV3 */
-#define I40E_PHINT_IPV4_L2TPV3 (I40E_PHINT_IPV4 | I40E_HASH_HDR_L2TPV3)
-#define I40E_PHINT_IPV6_L2TPV3 (I40E_PHINT_IPV6 | I40E_HASH_HDR_L2TPV3)
-
-/* AH */
-#define I40E_PHINT_IPV4_AH (I40E_PHINT_IPV4 | I40E_HASH_HDR_AH)
-#define I40E_PHINT_IPV6_AH (I40E_PHINT_IPV6 | I40E_HASH_HDR_AH)
-
-/* Structure of mapping RSS type to input set */
-struct i40e_hash_map_rss_inset {
- uint64_t rss_type;
- uint64_t inset;
-};
-
-const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
- /* IPv4 */
- { RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
- { RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-
- { RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
- I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-
- { RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
-
- { RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
-
- { RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
-
- /* IPv6 */
- { RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
- { RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-
- { RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
- I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-
- { RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
-
- { RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
-
- { RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
- I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
-
- /* Port */
- { RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
-
- /* Ether */
- { RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
- { RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
-
- /* VLAN */
- { RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
- { RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
-};
-
-#define I40E_HASH_VOID_NEXT_ALLOW BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
-
-#define I40E_HASH_ETH_NEXT_ALLOW (BIT_ULL(RTE_FLOW_ITEM_TYPE_IPV4) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_IPV6) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_VLAN))
-
-#define I40E_HASH_IP_NEXT_ALLOW (BIT_ULL(RTE_FLOW_ITEM_TYPE_TCP) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_UDP) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_SCTP) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_ESP) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_L2TPV3OIP) |\
- BIT_ULL(RTE_FLOW_ITEM_TYPE_AH))
-
-#define I40E_HASH_IPV6_NEXT_ALLOW (I40E_HASH_IP_NEXT_ALLOW | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT))
-
-#define I40E_HASH_UDP_NEXT_ALLOW (BIT_ULL(RTE_FLOW_ITEM_TYPE_GTPU) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_GTPC))
-
-#define I40E_HASH_GTPU_NEXT_ALLOW (BIT_ULL(RTE_FLOW_ITEM_TYPE_IPV4) | \
- BIT_ULL(RTE_FLOW_ITEM_TYPE_IPV6))
-
-static const uint64_t pattern_next_allow_items[] = {
- [RTE_FLOW_ITEM_TYPE_VOID] = I40E_HASH_VOID_NEXT_ALLOW,
- [RTE_FLOW_ITEM_TYPE_ETH] = I40E_HASH_ETH_NEXT_ALLOW,
- [RTE_FLOW_ITEM_TYPE_IPV4] = I40E_HASH_IP_NEXT_ALLOW,
- [RTE_FLOW_ITEM_TYPE_IPV6] = I40E_HASH_IPV6_NEXT_ALLOW,
- [RTE_FLOW_ITEM_TYPE_UDP] = I40E_HASH_UDP_NEXT_ALLOW,
- [RTE_FLOW_ITEM_TYPE_GTPU] = I40E_HASH_GTPU_NEXT_ALLOW,
-};
-
-static const uint64_t pattern_item_header[] = {
- [RTE_FLOW_ITEM_TYPE_ETH] = I40E_HASH_HDR_ETH,
- [RTE_FLOW_ITEM_TYPE_IPV4] = I40E_HASH_HDR_IPV4,
- [RTE_FLOW_ITEM_TYPE_IPV6] = I40E_HASH_HDR_IPV6,
- [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = I40E_HASH_HDR_IPV6_FRAG,
- [RTE_FLOW_ITEM_TYPE_TCP] = I40E_HASH_HDR_TCP,
- [RTE_FLOW_ITEM_TYPE_UDP] = I40E_HASH_HDR_UDP,
- [RTE_FLOW_ITEM_TYPE_SCTP] = I40E_HASH_HDR_SCTP,
- [RTE_FLOW_ITEM_TYPE_ESP] = I40E_HASH_HDR_ESP,
- [RTE_FLOW_ITEM_TYPE_GTPC] = I40E_HASH_HDR_GTPC,
- [RTE_FLOW_ITEM_TYPE_GTPU] = I40E_HASH_HDR_GTPU,
- [RTE_FLOW_ITEM_TYPE_L2TPV3OIP] = I40E_HASH_HDR_L2TPV3,
- [RTE_FLOW_ITEM_TYPE_AH] = I40E_HASH_HDR_AH,
-};
-
-/* Structure of matched pattern */
-struct i40e_hash_match_pattern {
- uint64_t pattern_type;
- uint64_t rss_mask; /* Supported RSS type for this pattern */
- bool custom_pctype_flag;/* true for custom packet type */
- uint8_t pctype;
-};
-
-#define I40E_HASH_MAP_PATTERN(pattern, rss_mask, pctype) { \
- pattern, rss_mask, false, pctype }
-
-#define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
- pattern, rss_mask, true, cus_pctype }
-
-#define I40E_HASH_L2_RSS_MASK (RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
- RTE_ETH_RSS_L2_SRC_ONLY | \
- RTE_ETH_RSS_L2_DST_ONLY)
-
-#define I40E_HASH_L23_RSS_MASK (I40E_HASH_L2_RSS_MASK | \
- RTE_ETH_RSS_L3_SRC_ONLY | \
- RTE_ETH_RSS_L3_DST_ONLY)
-
-#define I40E_HASH_IPV4_L23_RSS_MASK (RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK (RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
-
-#define I40E_HASH_L234_RSS_MASK (I40E_HASH_L23_RSS_MASK | \
- RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
- RTE_ETH_RSS_L4_DST_ONLY)
-
-#define I40E_HASH_IPV4_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK (I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
-
-#define I40E_HASH_L4_TYPES (RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
- RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-
const uint8_t i40e_rss_key_default[] = {
0x44, 0x39, 0x79, 0x6b,
0xb5, 0x4c, 0x50, 0x23,
@@ -251,395 +33,6 @@ const uint8_t i40e_rss_key_default[] = {
0x81, 0x15, 0x03, 0x66
};
-/* Current supported patterns and RSS types.
- * All items that have the same pattern types are together.
- */
-static const struct i40e_hash_match_pattern match_patterns[] = {
- /* Ether */
- I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
- RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
- I40E_FILTER_PCTYPE_L2_PAYLOAD),
-
- /* IPv4 */
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
- I40E_FILTER_PCTYPE_FRAG_IPV4),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
- RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
- I40E_HASH_IPV4_L23_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
- RTE_ETH_RSS_NONFRAG_IPV4_TCP |
- I40E_HASH_IPV4_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
- RTE_ETH_RSS_NONFRAG_IPV4_UDP |
- I40E_HASH_IPV4_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
- I40E_HASH_IPV4_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
-
- /* IPv6 */
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
- I40E_FILTER_PCTYPE_FRAG_IPV6),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
- RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
- I40E_HASH_IPV6_L23_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_FRAG,
- RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_L23_RSS_MASK,
- I40E_FILTER_PCTYPE_FRAG_IPV6),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
- RTE_ETH_RSS_NONFRAG_IPV6_TCP |
- I40E_HASH_IPV6_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
- RTE_ETH_RSS_NONFRAG_IPV6_UDP |
- I40E_HASH_IPV6_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
-
- I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
- I40E_HASH_IPV6_L234_RSS_MASK,
- I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
-
- /* ESP */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
- RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
- RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
- RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
- RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
-
- /* GTPC */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
- I40E_HASH_IPV4_L234_RSS_MASK,
- I40E_CUSTOMIZED_GTPC),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPC,
- I40E_HASH_IPV6_L234_RSS_MASK,
- I40E_CUSTOMIZED_GTPC),
-
- /* GTPU */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU,
- I40E_HASH_IPV4_L234_RSS_MASK,
- I40E_CUSTOMIZED_GTPU),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
- RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
- RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
- I40E_HASH_IPV6_L234_RSS_MASK,
- I40E_CUSTOMIZED_GTPU),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
- RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
- RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
-
- /* L2TPV3 */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
- RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
- RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
-
- /* AH */
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
- I40E_CUSTOMIZED_AH_IPV4),
- I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
- I40E_CUSTOMIZED_AH_IPV6),
-};
-
-static int
-i40e_hash_get_pattern_type(const struct rte_flow_item pattern[],
- uint64_t *pattern_types,
- struct rte_flow_error *error)
-{
- const char *message = "Pattern not supported";
- enum rte_flow_item_type prev_item_type = RTE_FLOW_ITEM_TYPE_VOID;
- enum rte_flow_item_type last_item_type = prev_item_type;
- uint64_t item_hdr, pattern_hdrs = 0;
- bool inner_flag = false;
- int vlan_count = 0;
-
- for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) {
- if (pattern->type == RTE_FLOW_ITEM_TYPE_VOID)
- continue;
-
- if (pattern->mask || pattern->spec || pattern->last) {
- message = "Header info should not be specified";
- goto not_sup;
- }
-
- /* Check the previous item allows this sub-item. */
- if (prev_item_type >= (enum rte_flow_item_type)
- RTE_DIM(pattern_next_allow_items) ||
- !(pattern_next_allow_items[prev_item_type] &
- BIT_ULL(pattern->type)))
- goto not_sup;
-
- /* For VLAN item, it does no matter about to pattern type
- * recognition. So just count the number of VLAN and do not
- * change the value of variable `prev_item_type`.
- */
- last_item_type = pattern->type;
- if (last_item_type == RTE_FLOW_ITEM_TYPE_VLAN) {
- if (vlan_count >= 2)
- goto not_sup;
- vlan_count++;
- continue;
- }
-
- prev_item_type = last_item_type;
- if (last_item_type >= (enum rte_flow_item_type)
- RTE_DIM(pattern_item_header))
- goto not_sup;
-
- item_hdr = pattern_item_header[last_item_type];
- assert(item_hdr);
-
- if (inner_flag) {
- item_hdr <<= I40E_HASH_HDR_INNER_SHIFT;
-
- /* Inner layer should not have GTPU item */
- if (last_item_type == RTE_FLOW_ITEM_TYPE_GTPU)
- goto not_sup;
- } else {
- if (last_item_type == RTE_FLOW_ITEM_TYPE_GTPU) {
- inner_flag = true;
- vlan_count = 0;
- }
- }
-
- if (item_hdr & pattern_hdrs)
- goto not_sup;
-
- pattern_hdrs |= item_hdr;
- }
-
- if (pattern_hdrs && last_item_type != RTE_FLOW_ITEM_TYPE_VLAN) {
- *pattern_types = pattern_hdrs;
- return 0;
- }
-
-not_sup:
- return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- pattern, message);
-}
-
-static uint64_t
-i40e_hash_get_x722_ext_pctypes(uint8_t match_pctype)
-{
- uint64_t pctypes = 0;
-
- switch (match_pctype) {
- case I40E_FILTER_PCTYPE_NONF_IPV4_TCP:
- pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
- break;
-
- case I40E_FILTER_PCTYPE_NONF_IPV4_UDP:
- pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
- break;
-
- case I40E_FILTER_PCTYPE_NONF_IPV6_TCP:
- pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
- break;
-
- case I40E_FILTER_PCTYPE_NONF_IPV6_UDP:
- pctypes = BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
- break;
- }
-
- return pctypes;
-}
-
-static int
-i40e_hash_translate_gtp_inset(struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- if (rss_conf->inset &
- (I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC |
- I40E_INSET_DST_PORT | I40E_INSET_SRC_PORT))
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- NULL,
- "Only support external destination IP");
-
- if (rss_conf->inset & I40E_INSET_IPV4_DST)
- rss_conf->inset = (rss_conf->inset & ~I40E_INSET_IPV4_DST) |
- I40E_INSET_TUNNEL_IPV4_DST;
-
- if (rss_conf->inset & I40E_INSET_IPV6_DST)
- rss_conf->inset = (rss_conf->inset & ~I40E_INSET_IPV6_DST) |
- I40E_INSET_TUNNEL_IPV6_DST;
-
- return 0;
-}
-
-static int
-i40e_hash_get_pctypes(const struct rte_eth_dev *dev,
- const struct i40e_hash_match_pattern *match,
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- if (match->custom_pctype_flag) {
- struct i40e_pf *pf;
- struct i40e_customized_pctype *custom_type;
-
- pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- custom_type = i40e_find_customized_pctype(pf, match->pctype);
- if (!custom_type || !custom_type->valid)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "PCTYPE not supported");
-
- rss_conf->config_pctypes |= BIT_ULL(custom_type->pctype);
-
- if (match->pctype == I40E_CUSTOMIZED_GTPU ||
- match->pctype == I40E_CUSTOMIZED_GTPC)
- return i40e_hash_translate_gtp_inset(rss_conf, error);
- } else {
- struct i40e_hw *hw =
- I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- uint64_t types;
-
- rss_conf->config_pctypes |= BIT_ULL(match->pctype);
- if (hw->mac.type == I40E_MAC_X722) {
- types = i40e_hash_get_x722_ext_pctypes(match->pctype);
- rss_conf->config_pctypes |= types;
- }
- }
-
- return 0;
-}
-
-static int
-i40e_hash_get_pattern_pctypes(const struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action_rss *rss_act,
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- uint64_t pattern_types = 0;
- bool match_flag = false;
- int i, ret;
-
- ret = i40e_hash_get_pattern_type(pattern, &pattern_types, error);
- if (ret)
- return ret;
-
- for (i = 0; i < (int)RTE_DIM(match_patterns); i++) {
- const struct i40e_hash_match_pattern *match =
- &match_patterns[i];
-
- /* Check pattern types match. All items that have the same
- * pattern types are together, so if the pattern types match
- * previous item but they doesn't match current item, it means
- * the pattern types do not match all remain items.
- */
- if (pattern_types != match->pattern_type) {
- if (match_flag)
- break;
- continue;
- }
- match_flag = true;
-
- /* Check RSS types match */
- if (!(rss_act->types & ~match->rss_mask)) {
- ret = i40e_hash_get_pctypes(dev, match,
- rss_conf, error);
- if (ret)
- return ret;
- }
- }
-
- if (rss_conf->config_pctypes)
- return 0;
-
- if (match_flag)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- NULL, "RSS types not supported");
-
- return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Pattern not supported");
-}
-
-static uint64_t
-i40e_hash_get_inset(uint64_t rss_types, bool symmetric_enable)
-{
- uint64_t mask, inset = 0;
- int i;
-
- for (i = 0; i < (int)RTE_DIM(i40e_hash_rss_inset); i++) {
- if (rss_types & i40e_hash_rss_inset[i].rss_type)
- inset |= i40e_hash_rss_inset[i].inset;
- }
-
- if (!inset)
- return 0;
-
- /* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
- * it is the same case as none of them are added.
- */
- mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
- if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
- inset &= ~I40E_INSET_DMAC;
- else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
- inset &= ~I40E_INSET_SMAC;
-
- mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
- inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
- else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
- inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
-
- mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
- if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
- inset &= ~I40E_INSET_DST_PORT;
- else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
- inset &= ~I40E_INSET_SRC_PORT;
-
- if (rss_types & I40E_HASH_L4_TYPES) {
- uint64_t l3_mask = rss_types &
- (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
- uint64_t l4_mask = rss_types &
- (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
-
- if (l3_mask && !l4_mask)
- inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
- else if (!l3_mask && l4_mask)
- inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST |
- I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
- }
-
- /* SCTP Verification Tag is not required in hash computation for SYMMETRIC_TOEPLITZ */
- if (symmetric_enable) {
- mask = rss_types & RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
- if (mask == RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
- inset &= ~I40E_INSET_SCTP_VT;
-
- mask = rss_types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
- if (mask == RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
- inset &= ~I40E_INSET_SCTP_VT;
- }
-
- return inset;
-}
-
static int
i40e_hash_config_func(struct i40e_hw *hw, enum rte_eth_hash_function func)
{
@@ -921,375 +314,6 @@ i40e_hash_config(struct i40e_pf *pf,
return 0;
}
-static void
-i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act,
- struct i40e_rte_flow_rss_conf *rss_conf)
-{
- const uint8_t *key = rss_act->key;
-
- if (key == NULL) {
- memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key));
- } else {
- memcpy(rss_conf->key, key, sizeof(rss_conf->key));
- }
-
- rss_conf->conf.key = rss_conf->key;
- rss_conf->conf.key_len = sizeof(rss_conf->key);
-}
-
-static int
-i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action_rss *rss_act,
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- rss_conf->symmetric_enable = rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ;
-
- if (rss_act->key_len)
- i40e_hash_parse_key(rss_act, rss_conf);
-
- rss_conf->conf.func = rss_act->func;
- rss_conf->conf.types = rss_act->types;
- rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable);
-
- return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act,
- rss_conf, error);
-}
-
-static int
-i40e_hash_parse_queues(const struct rte_flow_action_rss *rss_act,
- struct i40e_rte_flow_rss_conf *rss_conf)
-{
- memcpy(rss_conf->queue, rss_act->queue,
- rss_act->queue_num * sizeof(rss_conf->queue[0]));
- rss_conf->conf.queue = rss_conf->queue;
- rss_conf->conf.queue_num = rss_act->queue_num;
- return 0;
-}
-
-static int
-i40e_hash_parse_queue_region(const struct rte_flow_item pattern[],
- const struct rte_flow_action_rss *rss_act,
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
-
- vlan_spec = pattern->spec;
- vlan_mask = pattern->mask;
-
- /* VLAN must have spec and mask */
- if (vlan_spec == NULL || vlan_mask == NULL) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0],
- "VLAN pattern spec and mask required");
- }
- /* for mask, VLAN/TCI must be masked appropriately */
- if ((rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 0x7) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0],
- "VLAN pattern mask invalid");
- }
-
- /* Use a 64 bit variable to represent all queues in a region. */
- RTE_BUILD_BUG_ON(I40E_MAX_Q_PER_TC > 64);
-
- rss_conf->region_queue_num = (uint8_t)rss_act->queue_num;
- rss_conf->region_queue_start = rss_act->queue[0];
- rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13;
- return 0;
-}
-
-static bool
-i40e_hash_validate_rss_types(uint64_t rss_types)
-{
- uint64_t type, mask;
-
- /* Validate L2 */
- type = RTE_ETH_RSS_ETH & rss_types;
- mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
- if (!type && mask)
- return false;
-
- /* Validate L3 */
- type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
- RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
- RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
- mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
- if (!type && mask)
- return false;
-
- /* Validate L4 */
- type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
- mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
- if (!type && mask)
- return false;
-
- return true;
-}
-
-static int
-i40e_hash_validate_rss_pattern(const struct ci_flow_actions *actions,
- const struct ci_flow_actions_check_param *param __rte_unused,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
-
- /* queue list is not supported */
- if (rss_act->queue_num == 0) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS queues not supported when pattern specified");
- }
-
- /* disallow unsupported hash functions */
- switch (rss_act->func) {
- case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
- case RTE_ETH_HASH_FUNCTION_DEFAULT:
- case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
- case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
- break;
- default:
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS hash function not supported when pattern specified");
- }
-
- if (!i40e_hash_validate_rss_types(rss_act->types))
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- rss_act, "RSS types are invalid");
-
- /* check RSS key length if it is specified */
- if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS key length must be 52 bytes");
- }
-
- return 0;
-}
-
-static int
-i40e_hash_validate_rss_common(const struct rte_flow_action_rss *rss_act,
- struct rte_flow_error *error)
-{
- /* for empty patterns, symmetric toeplitz is not supported */
- if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Symmetric hash function not supported without specific patterns");
- }
-
- /* hash types are not supported for global RSS configuration */
- if (rss_act->types != 0) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS types not supported without a pattern");
- }
-
- /* check RSS key length if it is specified */
- if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS key length must be 52 bytes");
- }
-
- return 0;
-}
-
-static int
-i40e_hash_validate_queue_region(const struct ci_flow_actions *actions,
- const struct ci_flow_actions_check_param *param,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
- struct rte_eth_dev *dev = param->driver_ctx;
- const struct i40e_pf *pf;
- uint64_t hash_queues;
-
- if (i40e_hash_validate_rss_common(rss_act, error))
- return -rte_errno;
-
- RTE_BUILD_BUG_ON(sizeof(hash_queues) != sizeof(pf->hash_enabled_queues));
-
- /* having RSS key is not supported */
- if (rss_act->key != NULL) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS key not supported");
- }
-
- /* queue region must be specified */
- if (rss_act->queue_num == 0) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS queues missing");
- }
-
- /* queue region must be power of two */
- if (!rte_is_power_of_2(rss_act->queue_num)) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS queue number must be power of two");
- }
-
- /* generic checks already filtered out discontiguous/non-unique RSS queues */
-
- /* queues must not exceed maximum queues per traffic class */
- if (rss_act->queue[rss_act->queue_num - 1] >= I40E_MAX_Q_PER_TC) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Invalid RSS queue index");
- }
-
- /* queues must be in LUT */
- pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) &
- ~(BIT_ULL(rss_act->queue[0]) - 1);
-
- if (hash_queues & ~pf->hash_enabled_queues) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- rss_act, "Some queues are not in LUT");
- }
-
- return 0;
-}
-
-static int
-i40e_hash_validate_queue_list(const struct ci_flow_actions *actions,
- const struct ci_flow_actions_check_param *param,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf;
- struct rte_eth_dev *dev = param->driver_ctx;
- struct i40e_pf *pf;
- struct i40e_hw *hw;
- uint16_t max_queue;
- bool has_queue, has_key;
-
- if (i40e_hash_validate_rss_common(rss_act, error))
- return -rte_errno;
-
- has_queue = rss_act->queue != NULL;
- has_key = rss_act->key != NULL;
-
- /* if we have queues, we must not have key */
- if (has_queue && has_key) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "RSS key for queue region is not supported");
- }
-
- /* if there are no queues, no further checks needed */
- if (!has_queue)
- return 0;
-
- /* check queue number limits */
- hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (rss_act->queue_num > hw->func_caps.rss_table_size) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF,
- rss_act, "Too many RSS queues");
- }
-
- pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
- max_queue = i40e_pf_calc_configured_queues_num(pf);
- else
- max_queue = pf->dev_data->nb_rx_queues;
-
- max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC);
-
- /* we know RSS queues are contiguous so we only need to check last queue */
- if (rss_act->queue[rss_act->queue_num - 1] >= max_queue) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act,
- "Invalid RSS queue");
- }
-
- return 0;
-}
-
-int
-i40e_hash_parse(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error)
-{
- struct ci_flow_actions parsed_actions;
- struct ci_flow_actions_check_param ac_param = {
- .allowed_types = (enum rte_flow_action_type[]) {
- RTE_FLOW_ACTION_TYPE_RSS,
- RTE_FLOW_ACTION_TYPE_END
- },
- .max_actions = 1,
- .driver_ctx = dev
- /* each pattern type will add specific check function */
- };
- const struct rte_flow_action_rss *rss_act;
- int ret;
-
- /*
- * We have two possible paths: global RSS configuration, and an RSS pattern action.
- *
- * For global patterns, we act on two types of flows:
- * - Empty pattern ([END])
- * - VLAN pattern ([VLAN] -> [END])
- *
- * Everything else is handled by pattern action parser.
- */
- bool is_empty, is_vlan;
-
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VOID)
- pattern++;
-
- is_empty = pattern[0].type == RTE_FLOW_ITEM_TYPE_END;
- is_vlan = pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN &&
- pattern[1].type == RTE_FLOW_ITEM_TYPE_END;
-
- /* VLAN path */
- if (is_vlan) {
- ac_param.check = i40e_hash_validate_queue_region;
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- rss_act = parsed_actions.actions[0]->conf;
- /* set up RSS functions */
- rss_conf->conf.func = rss_act->func;
- return i40e_hash_parse_queue_region(pattern, rss_act, rss_conf, error);
- }
- /* Empty pattern path */
- if (is_empty) {
- ac_param.check = i40e_hash_validate_queue_list;
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- rss_act = parsed_actions.actions[0]->conf;
- rss_conf->conf.func = rss_act->func;
- /* if there is a queue list, take that path */
- if (rss_act->queue != NULL) {
- return i40e_hash_parse_queues(rss_act, rss_conf);
- }
- /* otherwise just parse RSS key */
- if (rss_act->key != NULL) {
- i40e_hash_parse_key(rss_act, rss_conf);
- }
- return 0;
- }
- ac_param.check = i40e_hash_validate_rss_pattern;
- ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error);
- if (ret)
- return ret;
- rss_act = parsed_actions.actions[0]->conf;
-
- /* pattern case */
- return i40e_hash_parse_pattern_act(dev, pattern, rss_act, rss_conf, error);
-}
-
static void
i40e_invalid_rss_filter(const struct i40e_rte_flow_rss_conf *ref_conf,
struct i40e_rte_flow_rss_conf *conf)
@@ -1459,13 +483,13 @@ i40e_hash_reset_conf(struct i40e_pf *pf,
int
i40e_hash_filter_destroy(struct i40e_pf *pf,
- const struct i40e_rss_filter *rss_filter)
+ const struct i40e_rte_flow_rss_conf *rss_conf)
{
struct i40e_rss_filter *filter;
int ret;
TAILQ_FOREACH(filter, &pf->rss_config_list, next) {
- if (rss_filter == filter) {
+ if (rss_conf == &filter->rss_filter_info) {
ret = i40e_hash_reset_conf(pf,
&filter->rss_filter_info);
if (ret)
diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h
index 99df4bccd0..a3c1588e93 100644
--- a/drivers/net/intel/i40e/i40e_hash.h
+++ b/drivers/net/intel/i40e/i40e_hash.h
@@ -13,18 +13,12 @@
extern "C" {
#endif
-int i40e_hash_parse(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct i40e_rte_flow_rss_conf *rss_conf,
- struct rte_flow_error *error);
-
int i40e_hash_filter_create(struct i40e_pf *pf,
struct i40e_rte_flow_rss_conf *rss_conf);
int i40e_hash_filter_restore(struct i40e_pf *pf);
int i40e_hash_filter_destroy(struct i40e_pf *pf,
- const struct i40e_rss_filter *rss_filter);
+ const struct i40e_rte_flow_rss_conf *rss_conf);
int i40e_hash_filter_flush(struct i40e_pf *pf);
#define I40E_RSS_KEY_LEN ((I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t))
diff --git a/drivers/net/intel/i40e/meson.build b/drivers/net/intel/i40e/meson.build
index 9cff46b5e6..bb15d10f95 100644
--- a/drivers/net/intel/i40e/meson.build
+++ b/drivers/net/intel/i40e/meson.build
@@ -28,6 +28,7 @@ sources += files(
'i40e_flow_ethertype.c',
'i40e_flow_fdir.c',
'i40e_flow_tunnel.c',
+ 'i40e_flow_hash.c',
'i40e_tm.c',
'i40e_hash.c',
'i40e_vf_representor.c',
--
2.47.3
* Re: [RFC PATCH v1 00/21] Building a better rte_flow parser
2026-03-16 17:27 [RFC PATCH v1 00/21] Building a better rte_flow parser Anatoly Burakov
` (20 preceding siblings ...)
2026-03-16 17:27 ` [RFC PATCH v1 21/21] net/i40e: reimplement hash parser Anatoly Burakov
@ 2026-03-17 0:42 ` Stephen Hemminger
21 siblings, 0 replies; 23+ messages in thread
From: Stephen Hemminger @ 2026-03-17 0:42 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
On Mon, 16 Mar 2026 17:27:28 +0000
Anatoly Burakov <anatoly.burakov@intel.com> wrote:
> Most rte_flow parsers in DPDK suffer from huge implementation complexity because
> even though 99% of what people use rte_flow parsers for is parsing protocol
> graphs, no parser is written explicitly as a graph. This patchset attempts to
> suggest a viable model to build rte_flow parsers as graphs, by offering a
> lightweight header-only library to build rte_flow parsing graphs without too
> much boilerplate and complexity.
>
> Most of the patchset is about Intel drivers, but they are meant as
> reimplementations as well as examples for the rest of the community to assess
> how to build parsers using this new infrastructure. I expect the first two
> patches will be of most interest to non-Intel reviewers, as they deal with
> building two reusable parser architecture pieces.
>
> The first piece is a new flow graph helper in ethdev. Its purpose is
> deliberately narrow: it targets the protocol-graph part of rte_flow pattern
> parsing, where drivers walk packet headers and validate legal item sequences and
> parameters. That does not cover all possible rte_flow features, especially more
> exotic flow items, but it does cover a large and widely shared part of what
> existing drivers need to do. Or, to put it in other words, the only flow items
> this infrastructure *doesn't* cover is things that do not lend themselves well
> to be parsed as a graph of protocol headers (e.g. conntrack items). Everything
> else should be covered or cover-able.
>
> The second piece is a reusable flow engine framework for Intel Ethernet drivers.
> This is kept Intel-local for now because I do not feel it is generic enough to
> be presented as an ethdev-wide engine model. Even so, the intent is to establish
> a cleaner parser architecture with a defined interaction model, explicit memory
> ownership rules, and engine definitions that do not block secondary-process-safe
> usage. It is my hope that we could also promote something like this into ethdev
> proper and remove the necessity for drivers to build so much boilerplate around
> rte_flow parsing (and more often than not doing it in a way that is more complex
> than it needs to be).
>
> Most of the rest of the series is parser reimplementation, but that is mainly
> the vehicle for demonstrating and validating those two pieces. ixgbe and i40e
> are wired into the new common parsing path, and their existing parsers are
> migrated incrementally to the graph-based model. Besides reducing ad hoc parser
> code, this also makes validation more explicit and more consistent with the
> actual install path. In a few places that means invalid inputs that were
> previously ignored, deferred, or interpreted loosely are now rejected earlier
> and more strictly, without any increase in code complexity (in fact, with marked
> *decrease* of it!).
>
> Series depends on previously submitted patchsets:
>
> - IAVF global buffer fix [1]
> - Common attr parsing stuff [2]
>
> [1] https://patches.dpdk.org/project/dpdk/list/?series=37585&state=*
> [2] https://patches.dpdk.org/project/dpdk/list/?series=37663&state=*
>
> Anatoly Burakov (21):
> ethdev: add flow graph API
> net/intel/common: add flow engines infrastructure
> net/intel/common: add utility functions
> net/ixgbe: add support for common flow parsing
> net/ixgbe: reimplement ethertype parser
> net/ixgbe: reimplement syn parser
> net/ixgbe: reimplement L2 tunnel parser
> net/ixgbe: reimplement ntuple parser
> net/ixgbe: reimplement security parser
> net/ixgbe: reimplement FDIR parser
> net/ixgbe: reimplement hash parser
> net/i40e: add support for common flow parsing
> net/i40e: reimplement ethertype parser
> net/i40e: reimplement FDIR parser
> net/i40e: reimplement tunnel QinQ parser
> net/i40e: reimplement VXLAN parser
> net/i40e: reimplement NVGRE parser
> net/i40e: reimplement MPLS parser
> net/i40e: reimplement gtp parser
> net/i40e: reimplement L4 cloud parser
> net/i40e: reimplement hash parser
>
> drivers/net/intel/common/flow_engine.h | 1003 ++++
> drivers/net/intel/common/flow_util.h | 165 +
> drivers/net/intel/i40e/i40e_ethdev.c | 56 +-
> drivers/net/intel/i40e/i40e_ethdev.h | 49 +-
> drivers/net/intel/i40e/i40e_fdir.c | 47 -
> drivers/net/intel/i40e/i40e_flow.c | 4092 +----------------
> drivers/net/intel/i40e/i40e_flow.h | 44 +
> drivers/net/intel/i40e/i40e_flow_ethertype.c | 258 ++
> drivers/net/intel/i40e/i40e_flow_fdir.c | 1806 ++++++++
> drivers/net/intel/i40e/i40e_flow_hash.c | 1289 ++++++
> drivers/net/intel/i40e/i40e_flow_tunnel.c | 1510 ++++++
> drivers/net/intel/i40e/i40e_hash.c | 980 +---
> drivers/net/intel/i40e/i40e_hash.h | 8 +-
> drivers/net/intel/i40e/meson.build | 4 +
> drivers/net/intel/ixgbe/ixgbe_ethdev.c | 40 +-
> drivers/net/intel/ixgbe/ixgbe_ethdev.h | 13 +-
> drivers/net/intel/ixgbe/ixgbe_fdir.c | 13 +-
> drivers/net/intel/ixgbe/ixgbe_flow.c | 3130 +------------
> drivers/net/intel/ixgbe/ixgbe_flow.h | 38 +
> .../net/intel/ixgbe/ixgbe_flow_ethertype.c | 240 +
> drivers/net/intel/ixgbe/ixgbe_flow_fdir.c | 1510 ++++++
> drivers/net/intel/ixgbe/ixgbe_flow_hash.c | 182 +
> drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c | 228 +
> drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c | 483 ++
> drivers/net/intel/ixgbe/ixgbe_flow_security.c | 297 ++
> drivers/net/intel/ixgbe/ixgbe_flow_syn.c | 280 ++
> drivers/net/intel/ixgbe/meson.build | 7 +
> lib/ethdev/meson.build | 1 +
> lib/ethdev/rte_flow_graph.h | 414 ++
> 29 files changed, 9867 insertions(+), 8320 deletions(-)
> create mode 100644 drivers/net/intel/common/flow_engine.h
> create mode 100644 drivers/net/intel/common/flow_util.h
> create mode 100644 drivers/net/intel/i40e/i40e_flow.h
> create mode 100644 drivers/net/intel/i40e/i40e_flow_ethertype.c
> create mode 100644 drivers/net/intel/i40e/i40e_flow_fdir.c
> create mode 100644 drivers/net/intel/i40e/i40e_flow_hash.c
> create mode 100644 drivers/net/intel/i40e/i40e_flow_tunnel.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow.h
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ethertype.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_fdir.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_hash.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_l2tun.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_ntuple.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_security.c
> create mode 100644 drivers/net/intel/ixgbe/ixgbe_flow_syn.c
> create mode 100644 lib/ethdev/rte_flow_graph.h
>
I ran a patch review with Claude and the current AGENTS.md file. It found some bugs right away.
---
I'll start by reading the AGENTS.md from the project and then examine the uploaded patch.
This is a 21-patch series. Let me split it and read through each patch systematically. Let me start with the core API patch (01/21), which is the most important one to review carefully.
Now let me read patches 02 and 03 (infrastructure), then quickly scan the driver implementation patches for correctness issues.
This is a massive series. Let me focus on the key correctness issues rather than reading every line of every patch. Let me check the most critical areas - the core API and the flow engine infrastructure - for bugs.
I found a key issue. Let me verify it. Good, I see the issue I suspected. Now let me check the `__flow_graph_visit_node` call with a `NULL` item for the start node, to see what happens when `item` is NULL (called for the START node).
Now let me also look at `ci_flow_engine_conf_reset` for the use-after-free I noticed. I see the use-after-free: `ci_flow_free(engine, flow)` runs before `TAILQ_REMOVE(&engine_conf->flows, flow, node)` - the flow is freed before being removed from the list.
Now let me check `ci_flow_create` for the same. I see another issue: `ci_flow_create` never inserts the flow into the `engine_conf->flows` TAILQ on success. The flow is allocated, parsed, and installed, but never added to the flow list. This means `ci_flow_destroy` and `ci_flow_flush` will never find it.
Let me also check the i40e patches quickly for any additional issues. Now let me check the `__rte_internal` on `static inline` functions - that's unusual.
Now let me write up the review. I've identified several significant issues across this 21-patch series.
---
**Review of [RFC PATCH v1 00/21] Building a better rte_flow parser**
**Patch 01/21: ethdev: add flow graph API**
**Error: NULL pointer dereference in `__flow_graph_node_is_expected` when called from start node**
`rte_flow_graph_parse()` calls `__flow_graph_visit_node(cur_node, ctx, NULL, error)` for the START node. `__flow_graph_visit_node` passes this NULL `item` to `__flow_graph_node_is_expected()`, which unconditionally dereferences it (`item->spec`, `item->mask`, `item->last`) when `constraints != RTE_FLOW_NODE_EXPECT_NONE`. If any start node has non-NONE constraints, this is a NULL dereference. In practice the existing start nodes all use `EXPECT_NONE` so the early return saves it, but the API is fragile — a single misconfigured graph will crash. Either guard with `if (item == NULL) return true;` before the dereferences, or document the contract that start nodes must always be `EXPECT_NONE`.
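A minimal standalone sketch of the proposed guard. The struct and enum names here are simplified stand-ins for the series' types, not the actual definitions from patch 01/21:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the flow graph types. */
struct item { const void *spec, *mask, *last; };
enum expect { EXPECT_NONE, EXPECT_SPEC };

/* The START node is visited with a NULL item, so bail out before
 * dereferencing item->spec/mask/last. */
static bool node_is_expected(const struct item *item, enum expect constraints)
{
	if (constraints == EXPECT_NONE)
		return true;
	if (item == NULL)	/* proposed guard for the START node */
		return true;
	/* stand-in for the real per-constraint checks */
	return item->spec != NULL;
}
```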
**Error: Use-after-free in `ci_flow_engine_conf_reset` (patch 02/21)**
In the flow cleanup loop:
```c
RTE_TAILQ_FOREACH_SAFE(flow, &engine_conf->flows, node, tmp) {
engine = ci_flow_engine_find(engine_list, flow->engine_type);
ci_flow_free(engine, flow); /* frees flow */
TAILQ_REMOVE(&engine_conf->flows, flow, node); /* accesses freed flow */
}
```
`ci_flow_free()` frees `flow`, then `TAILQ_REMOVE` dereferences `flow->node`. The remove must come before the free.
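A self-contained sketch of the corrected ordering, using plain `sys/queue.h` rather than DPDK's `RTE_TAILQ_FOREACH_SAFE`; the struct name is illustrative:

```c
#include <stdlib.h>
#include <sys/queue.h>

/* Simplified stand-in for the framework's flow struct. */
struct flow {
	TAILQ_ENTRY(flow) node;
};
TAILQ_HEAD(flow_list, flow);

/* Unlink each flow from the list *before* freeing it, so the list
 * macros never touch freed memory. */
static void flush_flows(struct flow_list *list)
{
	struct flow *flow;

	while ((flow = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, flow, node);	/* remove first */
		free(flow);			/* then free */
	}
}
```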
**Error: Flow never added to TAILQ in `ci_flow_create` (patch 02/21)**
`ci_flow_create()` allocates, parses, and installs a flow, but never calls `TAILQ_INSERT_TAIL(&engine_conf->flows, flow, node)`. This means `ci_flow_destroy` and `ci_flow_flush` will never find the flow (the `ci_flow_is_valid` TAILQ walk won't match), so flows are leaked on destroy/flush. The success path needs:
```c
if (ret == 0) {
flow->dev = engine_conf->dev;
flow->engine_type = engine->type;
TAILQ_INSERT_TAIL(&engine_conf->flows, flow, node);
goto unlock;
}
```
**Warning: `__rte_internal` on `static inline` functions is a no-op (patch 01/21)**
`__rte_internal` is a visibility/symbol-export annotation meaningful only for non-static functions that appear in linker symbol maps. On `static inline` functions in a header, it does nothing. Either drop the annotation (since these are header-only and already `static inline`), or if the intent is to mark them as internal API, use a comment instead.
**Warning: `enum rte_flow_graph_node_expect` brace on wrong line (patch 01/21)**
The opening brace is on its own line (`enum ... \n{`). DPDK style places it on the same line as `enum`.
**Patch 02/21: net/intel/common: add flow engines infrastructure**
**Warning: `ci_flow_alloc` erases only `struct ci_flow` base, not full allocation**
When the custom allocator succeeds, the code does `*flow = (struct ci_flow){0}`, which zeroes only the base struct. Any engine-specific fields beyond `ci_flow` in the allocation are left with whatever the custom allocator returned; if the allocator returns uninitialized memory, those fields are garbage. The framework comment says "Custom allocators only own their own engine-specific fields", but the contract should be explicit: either document that custom allocators must initialize (or zero) their own fields, or have the framework zero the full allocation size.
**Patch 03/21: net/intel/common: add utility functions**
**Warning: `ci_be24_to_cpu` uses `int` arithmetic on `uint8_t` shift**
`val[0] << 16` promotes `val[0]` to `int` (signed). If the compiler's `int` is 16 bits (hypothetically), this would be undefined. In practice on all DPDK targets `int` is 32 bits so it's fine, but `(uint32_t)val[0] << 16` would be more precise.
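A sketch of the suggested fix: cast each byte to `uint32_t` before shifting, so the arithmetic never happens in (signed) `int`. The body below is a plausible reconstruction of what `ci_be24_to_cpu` does, not the series' actual code:

```c
#include <stdint.h>

/* Assemble a 24-bit big-endian value into host order. The casts keep
 * the shifts in unsigned 32-bit arithmetic regardless of promotion. */
static inline uint32_t be24_to_cpu(const uint8_t val[3])
{
	return ((uint32_t)val[0] << 16) |
	       ((uint32_t)val[1] << 8) |
	        (uint32_t)val[2];
}
```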
**Patch 12/21: net/i40e: add support for common flow parsing**
**Warning: i40e's `struct rte_flow` has both `struct ci_flow base` and `TAILQ_ENTRY(rte_flow) node`**
The `ci_flow` base already contains a `TAILQ_ENTRY(ci_flow) node`. Having a second TAILQ_ENTRY in the i40e `rte_flow` struct means there are two independent list linkages. The old code uses `flow->node` via the legacy path; the new engine uses `flow->base.node` via `ci_flow`. This works during the transition but is confusing and wastes space. The legacy TAILQ_ENTRY should be removed once migration is complete.
**General observation across patches 04-21:**
The series progressively migrates each legacy flow parser to the new graph-based engine, which is a good incremental approach. The individual engine implementations (ethertype, syn, L2 tunnel, ntuple, security, FDIR, hash) appear to correctly translate the old hand-rolled parsing into graph nodes with validate/process callbacks. The graph definitions look structurally correct with proper edge termination sentinels.
The two critical bugs (use-after-free in reset, missing TAILQ insert in create) and the NULL dereference risk in the graph parser should be fixed before this moves beyond RFC.
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>