* [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's
@ 2026-02-11 14:20 Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 01/25] net/intel/common: add common flow action parsing Anatoly Burakov
` (26 more replies)
0 siblings, 27 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw)
To: dev
This patchset introduces common flow attr/action checking infrastructure to
some Intel PMDs (IXGBE, I40E, IAVF, and ICE). The aim is to reduce code
duplication, simplify the implementation of new parsers and the verification
of existing ones, and make action/attr handling more consistent across drivers.
Note: this patchset depends on the following other patchsets:
1) PMD bug fix patchset: https://patches.dpdk.org/project/dpdk/list/?series=37333
2) PMD cleanups patchset: https://patches.dpdk.org/project/dpdk/list/?series=37334
Anatoly Burakov (20):
net/intel/common: add common flow action parsing
net/intel/common: add common flow attr validation
net/ixgbe: use common checks in ethertype filter
net/ixgbe: use common checks in syn filter
net/ixgbe: use common checks in L2 tunnel filter
net/ixgbe: use common checks in ntuple filter
net/ixgbe: use common checks in security filter
net/ixgbe: use common checks in FDIR filters
net/ixgbe: use common checks in RSS filter
net/i40e: use common flow attribute checks
net/i40e: refactor RSS flow parameter checks
net/i40e: use common action checks for ethertype
net/i40e: use common action checks for FDIR
net/i40e: use common action checks for tunnel
net/iavf: use common flow attribute checks
net/iavf: use common action checks for IPsec
net/iavf: use common action checks for hash
net/iavf: use common action checks for FDIR
net/iavf: use common action checks for fsub
net/iavf: use common action checks for flow query
Vladimir Medvedkin (5):
net/ice: use common flow attribute checks
net/ice: use common action checks for hash
net/ice: use common action checks for FDIR
net/ice: use common action checks for switch
net/ice: use common action checks for ACL
drivers/net/intel/common/flow_check.h | 308 ++++++
drivers/net/intel/i40e/i40e_ethdev.h | 1 -
drivers/net/intel/i40e/i40e_flow.c | 433 +++-----
drivers/net/intel/i40e/i40e_hash.c | 427 +++++---
drivers/net/intel/i40e/i40e_hash.h | 2 +-
drivers/net/intel/iavf/iavf_fdir.c | 367 +++----
drivers/net/intel/iavf/iavf_fsub.c | 266 +++--
drivers/net/intel/iavf/iavf_generic_flow.c | 103 +-
drivers/net/intel/iavf/iavf_generic_flow.h | 2 +-
drivers/net/intel/iavf/iavf_hash.c | 153 +--
drivers/net/intel/iavf/iavf_ipsec_crypto.c | 42 +-
drivers/net/intel/ice/ice_acl_filter.c | 146 ++-
drivers/net/intel/ice/ice_fdir_filter.c | 384 ++++---
drivers/net/intel/ice/ice_generic_flow.c | 59 +-
drivers/net/intel/ice/ice_generic_flow.h | 2 +-
drivers/net/intel/ice/ice_hash.c | 189 ++--
drivers/net/intel/ice/ice_switch_filter.c | 388 +++----
drivers/net/intel/ixgbe/ixgbe_flow.c | 1136 +++++++-------------
18 files changed, 2157 insertions(+), 2251 deletions(-)
create mode 100644 drivers/net/intel/common/flow_check.h
--
2.47.3
^ permalink raw reply [flat|nested] 83+ messages in thread* [PATCH v1 01/25] net/intel/common: add common flow action parsing 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 02/25] net/intel/common: add common flow attr validation Anatoly Burakov ` (25 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson Currently, each driver has its own code for action parsing, which results in a lot of duplication and subtle mismatches in behavior between drivers. Add common infrastructure, based on the following assumptions: - All drivers support at most 4 actions at once, but usually fewer - Not every action is supported by all drivers - We can check a few common things to filter out obviously wrong actions - Driver performs semantic checks on all valid actions So, the intention is to reject everything we can reasonably reject at the outset without knowing anything about the drivers, parametrize what is trivial to parametrize, and leave the rest for the driver to implement. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/common/flow_check.h | 238 ++++++++++++++++++++++++++ 1 file changed, 238 insertions(+) create mode 100644 drivers/net/intel/common/flow_check.h diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h new file mode 100644 index 0000000000..1916ac169c --- /dev/null +++ b/drivers/net/intel/common/flow_check.h @@ -0,0 +1,238 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2025 Intel Corporation + */ + +#ifndef _COMMON_INTEL_FLOW_CHECK_H_ +#define _COMMON_INTEL_FLOW_CHECK_H_ + +#include <bus_pci_driver.h> +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Common attr and action validation code for Intel drivers. 
+ */ + +/** + * Maximum number of actions that can be stored in a parsed action list. + */ +#define CI_FLOW_PARSED_ACTIONS_MAX 4 + +/* Actions that are reasonably expected to have a conf structure */ +static const enum rte_flow_action_type need_conf[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END +}; + +/** + * Is action type in this list of action types? + */ +__rte_internal +static inline bool +ci_flow_action_type_in_list(const enum rte_flow_action_type type, + const enum rte_flow_action_type list[]) +{ + size_t i = 0; + while (list[i] != RTE_FLOW_ACTION_TYPE_END) { + if (type == list[i]) + return true; + i++; + } + return false; +} + +/* Forward declarations */ +struct ci_flow_actions; +struct ci_flow_actions_check_param; + +/** + * Driver-specific action list validation callback. + * + * Performs driver-specific validation of the parsed action list. + * Called after all actions have been parsed and added to the list, + * allowing validation based on the complete action set. + * + * @param actions + * The complete list of parsed actions (for context-dependent validation). + * @param param + * Check parameters, including the opaque driver context (e.g., adapter/queue configuration). + * @param error + * Pointer to rte_flow_error for reporting failures. + * @return + * 0 on success, negative errno on failure. + */ +typedef int (*ci_flow_actions_check_fn)( + const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error); + +/** + * List of actions that we know we've validated. + */ +struct ci_flow_actions { + /* Number of actions in the list. */ + uint8_t count; + /* Parsed actions array. 
*/ + struct rte_flow_action const *actions[CI_FLOW_PARSED_ACTIONS_MAX]; +}; + +/** + * Parameters for action list validation. Any element can be NULL/0 as checks are only performed + * against constraints specified. + */ +struct ci_flow_actions_check_param { + /** + * Driver-specific context pointer (e.g., adapter/queue configuration). Can be NULL. + */ + void *driver_ctx; + /** + * Driver-specific action list validation callback. Can be NULL. + */ + ci_flow_actions_check_fn check; + /** + * Allowed action types for this parse parameter. Must be terminated with + * RTE_FLOW_ACTION_TYPE_END. Can be NULL. + */ + const enum rte_flow_action_type *allowed_types; + size_t max_actions; /**< Maximum number of actions allowed. */ +}; + +static inline int +__flow_action_check_rss(const struct rte_flow_action_rss *rss, struct rte_flow_error *error) +{ + size_t q, q_num = rss->queue_num; + /* either we have both queue array and queue count, or we have neither */ + if ((q_num == 0) != (rss->queue == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue array and queue count must be specified together"); + } + for (q = 1; q < q_num; q++) { + uint16_t qi = rss->queue[q]; + if (rss->queue[q - 1] + 1 != qi) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queues must be contiguous"); + } + } + /* either we have both key and key length, or we have neither */ + if ((rss->key_len == 0) != (rss->key == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "If RSS key is specified, key length must also be specified"); + } + return 0; +} + +static inline int +__flow_action_check_generic(const struct rte_flow_action *action, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + /* is this action in our allowed list? 
*/ + if (param != NULL && param->allowed_types != NULL && + !ci_flow_action_type_in_list(action->type, param->allowed_types)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Unsupported action"); + } + /* do we need to validate presence of conf? */ + if (ci_flow_action_type_in_list(action->type, need_conf)) { + if (action->conf == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, action, + "Action requires configuration"); + } + } + + /* type-specific validation */ + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = + (const struct rte_flow_action_rss *)action->conf; + if (__flow_action_check_rss(rss, error) < 0) + return -rte_errno; + break; + } + default: + /* no specific validation */ + break; + } + + return 0; +} + +/** + * Validate and parse a list of rte_flow_action into a parsed action list. + * + * @param actions pointer to array of rte_flow_action, terminated by RTE_FLOW_ACTION_TYPE_END + * @param param pointer to ci_flow_actions_check_param structure (can be NULL) + * @param parsed_actions pointer to ci_flow_actions structure to store parsed actions + * @param error pointer to rte_flow_error structure for error reporting + * + * @return 0 on success, negative errno on failure. 
+ */ +__rte_internal +static inline int +ci_flow_check_actions(const struct rte_flow_action *actions, + const struct ci_flow_actions_check_param *param, + struct ci_flow_actions *parsed_actions, + struct rte_flow_error *error) +{ + size_t i = 0; + + if (actions == NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Missing actions"); + } + + /* reset the list */ + *parsed_actions = (struct ci_flow_actions){0}; + + while (actions[i].type != RTE_FLOW_ACTION_TYPE_END) { + const struct rte_flow_action *action = &actions[i++]; + + /* skip VOID actions */ + if (action->type == RTE_FLOW_ACTION_TYPE_VOID) + continue; + + /* generic validation for actions - this will check against param as well */ + if (__flow_action_check_generic(action, param, error) < 0) + return -rte_errno; + + /* add action to the list */ + if (parsed_actions->count >= RTE_DIM(parsed_actions->actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + /* user may have specified a maximum number of actions */ + if (param != NULL && param->max_actions != 0 && + parsed_actions->count >= param->max_actions) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + parsed_actions->actions[parsed_actions->count++] = action; + } + + /* now, call into user validation if specified */ + if (param != NULL && param->check != NULL) { + if (param->check(parsed_actions, param, error) < 0) + return -rte_errno; + } + /* an action list with no non-void actions is not valid */ + return parsed_actions->count == 0 ? -EINVAL : 0; +} + +#ifdef __cplusplus +} +#endif + +#endif /* _COMMON_INTEL_FLOW_CHECK_H_ */ -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 02/25] net/intel/common: add common flow attr validation 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 01/25] net/intel/common: add common flow action parsing Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov ` (24 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson There are a lot of commonalities between what kinds of flow attr each Intel driver supports. Add a helper function that will validate attr based on common requirements and (optional) parameter checks. Things we check for: - Rejecting NULL attr (obviously) - Default to ingress flows - Transfer, group, priority, and egress are not allowed unless requested Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/common/flow_check.h | 70 +++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h index 1916ac169c..b7c73fcf7c 100644 --- a/drivers/net/intel/common/flow_check.h +++ b/drivers/net/intel/common/flow_check.h @@ -53,6 +53,7 @@ ci_flow_action_type_in_list(const enum rte_flow_action_type type, /* Forward declarations */ struct ci_flow_actions; struct ci_flow_actions_check_param; +struct ci_flow_attr_check_param; /** * Driver-specific action list validation callback. @@ -231,6 +232,75 @@ ci_flow_check_actions(const struct rte_flow_action *actions, return parsed_actions->count == 0 ? -EINVAL : 0; } +/** + * Parameter structure for attr check. + */ +struct ci_flow_attr_check_param { + bool allow_priority; /**< True if priority attribute is allowed. */ + bool allow_transfer; /**< True if transfer attribute is allowed. 
*/ + bool allow_group; /**< True if group attribute is allowed. */ + bool expect_egress; /**< True if egress attribute is expected. */ +}; + +/** + * Validate rte_flow_attr structure against specified constraints. + * + * @param attr Pointer to rte_flow_attr structure to validate. + * @param attr_param Pointer to ci_flow_attr_check_param structure specifying constraints. + * @param error Pointer to rte_flow_error structure for error reporting. + * + * @return 0 on success, negative errno on failure. + */ +__rte_internal +static inline int +ci_flow_check_attr(const struct rte_flow_attr *attr, + const struct ci_flow_attr_check_param *attr_param, + struct rte_flow_error *error) +{ + if (attr == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "NULL attribute"); + } + + /* Direction must be either ingress or egress */ + if (attr->ingress == attr->egress) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "Exactly one of ingress and egress must be set"); + } + + /* Expect ingress by default */ + if (attr->egress && (attr_param == NULL || !attr_param->expect_egress)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, attr, + "Egress not supported"); + } + + /* May not be supported */ + if (attr->transfer && (attr_param == NULL || !attr_param->allow_transfer)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, attr, + "Transfer not supported"); + } + + /* May not be supported */ + if (attr->group && (attr_param == NULL || !attr_param->allow_group)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, attr, + "Group not supported"); + } + + /* May not be supported */ + if (attr->priority && (attr_param == NULL || !attr_param->allow_priority)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority not supported"); + } + + return 0; +} + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink 
raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 03/25] net/ixgbe: use common checks in ethertype filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 01/25] net/intel/common: add common flow action parsing Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 02/25] net/intel/common: add common flow attr validation Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov ` (23 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ethertype. This allows us to remove some checks as they are no longer necessary (such as whether DROP flag was set - if we do not accept DROP actions, we do not set the DROP flag), as well as make some checks more stringent (such as rejecting more than one action). Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 190 ++++++++++----------------- 1 file changed, 73 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index cd8d46019f..7695d57e2a 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -43,6 +43,8 @@ #include "base/ixgbe_phy.h" #include "rte_pmd_ixgbe.h" +#include "../common/flow_check.h" + #define IXGBE_MIN_N_TUPLE_PRIO 1 #define IXGBE_MAX_N_TUPLE_PRIO 7 @@ -133,6 +135,41 @@ const struct rte_flow_action *next_no_void_action( } } +/* + * All ixgbe engines mostly check the same stuff, so use a common check. 
+ */ +static int +ixgbe_flow_actions_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action; + struct rte_eth_dev *dev = (struct rte_eth_dev *)param->driver_ctx; + size_t idx; + + for (idx = 0; idx < actions->count; idx++) { + action = actions->actions[idx]; + + switch(action->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *queue = action->conf; + if (queue->index >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "queue index out of range"); + } + break; + } + default: + /* no specific validation */ + break; + } + } + return 0; +} + /** * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and @@ -712,38 +749,14 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, * item->last should be NULL. */ static int -cons_parse_ethertype_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item *pattern, - const struct rte_flow_action *actions, - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +cons_parse_ethertype_filter(const struct rte_flow_item *pattern, + const struct rte_flow_action *action, + struct rte_eth_ethertype_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_eth *eth_spec; const struct rte_flow_item_eth *eth_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return 
-rte_errno; - } item = next_no_void_pattern(pattern, NULL); /* The first non-void item should be MAC. */ @@ -813,87 +826,30 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* Parse action */ - - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - } else { - filter->flags |= RTE_ETHTYPE_FLAGS_DROP; - } - - /* Check if the next non-void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* Parse attr */ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } + filter->queue = ((const struct rte_flow_action_queue *)action->conf)->index; return 0; } static int 
-ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_ethertype_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -903,19 +859,27 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_ethertype_filter(attr, pattern, - actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - if (filter->queue >= dev->data->nb_rx_queues) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "queue index much too big"); - return -rte_errno; + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); } + action = parsed_actions.actions[0]; + + ret = cons_parse_ethertype_filter(pattern, action, 
filter, error); + if (ret) + return ret; if (filter->ether_type == RTE_ETHER_TYPE_IPV4 || filter->ether_type == RTE_ETHER_TYPE_IPV6) { @@ -934,14 +898,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "drop option is unsupported"); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 04/25] net/ixgbe: use common checks in syn filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (2 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov ` (22 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in syn filter. Some checks have been rearranged or become more stringent due to using common infrastructure. In particular, group attr was ignored previously but is now explicitly rejected. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 130 +++++++-------------------- 1 file changed, 32 insertions(+), 98 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 7695d57e2a..0aa96f04bb 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -922,38 +922,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr * item->last should be NULL. 
*/ static int -cons_parse_syn_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +cons_parse_syn_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], + const struct rte_flow_action_queue *q_act, struct rte_eth_syn_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_tcp *tcp_spec; const struct rte_flow_item_tcp *tcp_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } /* the first not void item should be MAC or IPv4 or IPv6 or TCP */ @@ -1061,63 +1036,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* check if the first not void action is QUEUE. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } + filter->queue = q_act->index; /* Support 2 priorities, the lowest or highest. 
*/ if (!attr->priority) { @@ -1128,7 +1047,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, memset(filter, 0, sizeof(struct rte_eth_syn_filter)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); + attr, "Priority can be 0 or 0xFFFFFFFF"); return -rte_errno; } @@ -1136,15 +1055,27 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, } static int -ixgbe_parse_syn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_syn_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -1154,16 +1085,19 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_syn_filter(attr, pattern, - actions, filter, error); - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - return 0; 
+ action = parsed_actions.actions[0]; + + return cons_parse_syn_filter(attr, pattern, action->conf, filter, error); } /** -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 05/25] net/ixgbe: use common checks in L2 tunnel filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (3 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov ` (21 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in L2 tunnel filter. Some checks have become more stringent as a result, in particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 148 +++++++++------------------ 1 file changed, 47 insertions(+), 101 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 0aa96f04bb..00104636a7 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -145,6 +145,7 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, { const struct rte_flow_action *action; struct rte_eth_dev *dev = (struct rte_eth_dev *)param->driver_ctx; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); size_t idx; for (idx = 0; idx < actions->count; idx++) { @@ -162,6 +163,17 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, } break; } + case RTE_FLOW_ACTION_TYPE_VF: + { + const struct rte_flow_action_vf *vf = action->conf; + if (vf->id >= pci_dev->max_vfs) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "VF id out of range"); + } + break; + } default: /* no specific validation */ break; @@ -869,12 +881,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const 
struct rte_flow_attr if (ret) return ret; - /* only one action is supported */ - if (parsed_actions.count > 1) { - return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - parsed_actions.actions[1], - "Only one action can be specified at a time"); - } action = parsed_actions.actions[0]; ret = cons_parse_ethertype_filter(pattern, action, filter, error); @@ -1120,40 +1126,16 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr */ static int cons_parse_l2_tn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action *action, struct ixgbe_l2_tunnel_conf *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; const struct rte_flow_item_e_tag *e_tag_spec; const struct rte_flow_item_e_tag *e_tag_mask; - const struct rte_flow_action *act; - const struct rte_flow_action_vf *act_vf; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* The first not void item should be e-tag. 
*/ item = next_no_void_pattern(pattern, NULL); if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) { @@ -1211,71 +1193,13 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is VF or PF. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_VF && - act->type != RTE_FLOW_ACTION_TYPE_PF) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = (const struct rte_flow_action_vf *)act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *act_vf = action->conf; filter->pool = act_vf->id; } else { filter->pool = pci_dev->max_vfs; } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } @@ -1287,29 +1211,51 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, struct ixgbe_l2_tunnel_conf *l2_tn_filter, struct rte_flow_error *error) { - int ret = 0; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - uint16_t vf_num; - - ret = cons_parse_l2_tn_filter(dev, attr, pattern, - actions, l2_tn_filter, error); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only vf/pf is allowed here */ + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + int ret = 0; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_X550 && hw->mac.type != ixgbe_mac_X550EM_x && hw->mac.type != ixgbe_mac_X550EM_a && hw->mac.type != ixgbe_mac_E610) { - memset(l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); rte_flow_error_set(error, EINVAL, 
RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Not supported by L2 tunnel filter"); return -rte_errno; } - vf_num = pci_dev->max_vfs; + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - if (l2_tn_filter->pool > vf_num) - return -rte_errno; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); + } + action = parsed_actions.actions[0]; + + ret = cons_parse_l2_tn_filter(dev, pattern, action, l2_tn_filter, error); return ret; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
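[Editor's note] The conversion above follows the common action-check pattern: the caller supplies an allowed-type list terminated by RTE_FLOW_ACTION_TYPE_END plus a max_actions limit, VOID actions are skipped, and anything else is rejected. The following is a minimal standalone model of that pattern; the names (check_actions, parsed_actions, ACT_*) are illustrative stand-ins, not the actual ci_flow_check_actions() API from this series.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow action types. */
enum action_type { ACT_END, ACT_VOID, ACT_QUEUE, ACT_VF, ACT_PF };

#define MAX_PARSED 8

struct parsed_actions {
	const enum action_type *actions[MAX_PARSED];
	size_t count;
};

static int type_in_list(enum action_type t, const enum action_type *allowed)
{
	/* allowed list is terminated by ACT_END, mirroring the END sentinel */
	for (; *allowed != ACT_END; allowed++)
		if (*allowed == t)
			return 1;
	return 0;
}

/* Returns 0 on success, -1 on unsupported action type or too many actions. */
static int check_actions(const enum action_type *actions,
		const enum action_type *allowed, size_t max_actions,
		struct parsed_actions *out)
{
	out->count = 0;
	for (; *actions != ACT_END; actions++) {
		if (*actions == ACT_VOID)
			continue; /* VOID actions are always skipped */
		if (!type_in_list(*actions, allowed))
			return -1; /* action type not allowed by this parser */
		if (out->count >= max_actions)
			return -1; /* more non-void actions than this filter supports */
		out->actions[out->count++] = actions;
	}
	return 0;
}
```

With the L2 tunnel filter's list (VF/PF, max_actions = 1), a VOID followed by a VF action parses to a single entry, while a QUEUE action or a second action is rejected up front, which is what lets the per-filter parsers drop their own END/NULL checks.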
* [PATCH v1 06/25] net/ixgbe: use common checks in ntuple filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (4 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov ` (20 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ntuple filter. As a result, some checks have become more stringent, in particular the group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 133 ++++++++------------------- 1 file changed, 36 insertions(+), 97 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 00104636a7..46161e6146 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -219,12 +219,11 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, static int cons_parse_ntuple_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action_queue *q_act, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_ipv4 *ipv4_spec; const struct rte_flow_item_ipv4 *ipv4_mask; const struct rte_flow_item_tcp *tcp_spec; @@ -240,24 +239,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, struct rte_flow_item_eth eth_null; struct rte_flow_item_vlan vlan_null; - if (!pattern) { - rte_flow_error_set(error, - EINVAL, 
RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* Priority must be 16-bit */ + if (attr->priority > UINT16_MAX) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority must be 16-bit"); } memset(&eth_null, 0, sizeof(struct rte_flow_item_eth)); @@ -553,70 +539,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, action: - /** - * n-tuple only supports forwarding, - * check if the first not void action is QUEUE. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - item, "Not supported action."); - return -rte_errno; - } - filter->queue = - ((const struct rte_flow_action_queue *)act->conf)->index; + filter->queue = q_act->index; - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not
supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } filter->priority = (uint16_t)attr->priority; - if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || - attr->priority > IXGBE_MAX_N_TUPLE_PRIO) - filter->priority = 1; + if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || attr->priority > IXGBE_MAX_N_TUPLE_PRIO) + filter->priority = 1; return 0; } @@ -705,15 +632,40 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { - int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540) return -ENOTSUP; - ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + ret = cons_parse_ntuple_filter(attr, pattern, action->conf, filter, error); if (ret) return ret; @@ -726,19 +678,6 @@ 
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* Ixgbe doesn't support many priorities. */ - if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO || - filter->priority > IXGBE_MAX_N_TUPLE_PRIO) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "Priority not supported by ntuple filter"); - return -rte_errno; - } - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; - /* fixed value for ixgbe */ filter->flags = RTE_5TUPLE_FLAGS; return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
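[Editor's note] The ntuple patch above keeps two separate priority rules: the attribute priority must fit in 16 bits (rejected at validation time), and values outside the range the hardware supports silently fall back to priority 1. A small sketch of that logic, assuming the 1..7 bounds of IXGBE_MIN/MAX_N_TUPLE_PRIO (treat the exact range as an assumption of this sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed stand-ins for IXGBE_MIN_N_TUPLE_PRIO / IXGBE_MAX_N_TUPLE_PRIO. */
#define MIN_N_TUPLE_PRIO 1
#define MAX_N_TUPLE_PRIO 7

/* Returns -1 if the priority cannot be represented, else the value used. */
static int ntuple_priority(uint32_t attr_priority)
{
	if (attr_priority > UINT16_MAX)
		return -1; /* rejected during attribute validation */
	if (attr_priority < MIN_N_TUPLE_PRIO || attr_priority > MAX_N_TUPLE_PRIO)
		return 1; /* out-of-range priorities fall back to 1 */
	return (int)(uint16_t)attr_priority;
}
```

Note the asymmetry the diff preserves: an over-16-bit priority is an error, but an in-range-of-16-bit yet unsupported priority (e.g. 0 or 100) is clamped rather than rejected, matching the pre-existing driver behaviour.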
* [PATCH v1 07/25] net/ixgbe: use common checks in security filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (5 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov ` (19 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in security filter. As a result, some checks have become more stringent. In particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 61 ++++++++++------------------ 1 file changed, 22 insertions(+), 39 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 46161e6146..38a8002611 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -556,7 +556,17 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); const struct rte_flow_action_security *security; const struct rte_flow_item *item; - const struct rte_flow_action *act; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only security is allowed here */ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -566,45 +576,18 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct 
rte_flow_attr hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - if (pattern == NULL) { - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - if (actions == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (attr == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - /* check if next non-void action is security */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } - security = act->conf; - if (security == NULL) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "NULL security action config."); - } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + action = parsed_actions.actions[0]; + security = action->conf; /* get the IP pattern*/ item = next_no_void_pattern(pattern, NULL); -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 08/25] net/ixgbe: use common checks in FDIR filters 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (6 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov ` (18 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in flow director filters (both tunnel and normal). As a result, some checks have become more stringent, in particular group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 290 ++++++++++++--------------- 1 file changed, 128 insertions(+), 162 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 38a8002611..c857eb0227 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -1182,111 +1182,6 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, return ret; } -/* Parse to get the attr and action info of flow director rule. 
*/ -static int -ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr, - const struct rte_flow_action actions[], - struct ixgbe_fdir_rule *rule, - struct rte_flow_error *error) -{ - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark; - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is QUEUE or DROP. */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - rule->queue = act_q->index; - } else { /* drop */ - /* signature mode does not support drop action. 
*/ - if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - rule->fdirflags = IXGBE_FDIRCMD_DROP; - } - - /* check if the next not void item is MARK */ - act = next_no_void_action(actions, act); - if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) && - (act->type != RTE_FLOW_ACTION_TYPE_END)) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rule->soft_id = 0; - - if (act->type == RTE_FLOW_ACTION_TYPE_MARK) { - mark = (const struct rte_flow_action_mark *)act->conf; - rule->soft_id = mark->id; - act = next_no_void_action(actions, act); - } - - /* check if the next not void item is END */ - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - return 0; -} - /* search next no void pattern and skip fuzzy */ static inline const struct rte_flow_item *next_no_fuzzy_pattern( @@ -1393,9 +1288,8 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[]) */ static int ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -1416,29 +1310,39 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, const struct rte_flow_item_vlan *vlan_mask; const struct rte_flow_item_raw *raw_mask; const struct rte_flow_item_raw *raw_spec; + const struct rte_flow_action *fwd_action, *aux_action; + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint8_t j; - struct 
ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + /* check if this is a signature match */ + if (signature_match(pattern)) + rule->mode = RTE_FDIR_MODE_SIGNATURE; + else + rule->mode = RTE_FDIR_MODE_PERFECT; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + /* signature mode does not support drop action. */ + if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, fwd_action, + "Signature mode does not support drop action."); + return -rte_errno; + } + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *m_act = aux_action->conf; + rule->soft_id = m_act->id; } /** @@ -1470,11 +1374,6 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, return -rte_errno; } - if (signature_match(pattern)) - rule->mode = RTE_FDIR_MODE_SIGNATURE; - else - rule->mode = RTE_FDIR_MODE_PERFECT; - /*Not supported last point for range*/ if (item->last) { rte_flow_error_set(error, EINVAL, @@ -2063,7 +1962,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, } } - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; } #define NVGRE_PROTOCOL 0x6558 @@ -2106,9 +2005,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, * 
item->last should be NULL. */ static int -ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_item pattern[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -2121,27 +2019,25 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, const struct rte_flow_item_eth *eth_mask; const struct rte_flow_item_vlan *vlan_spec; const struct rte_flow_item_vlan *vlan_mask; + const struct rte_flow_action *fwd_action, *aux_action; uint32_t j; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up queue/drop action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *mark = aux_action->conf; + rule->soft_id = mark->id; } /** @@ -2552,7 +2448,56 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, * Do nothing. 
*/ - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; +} + +/* + * Check flow director actions + */ +static int +ixgbe_fdir_actions_check(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const enum rte_flow_action_type fwd_actions[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }; + const struct rte_flow_action *action, *drop_action = NULL; + + /* do the generic checks first */ + int ret = ixgbe_flow_actions_check(parsed_actions, param, error); + if (ret) + return ret; + + /* first action must be a forwarding action */ + action = parsed_actions->actions[0]; + if (!ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "First action must be QUEUE or DROP"); + } + /* remember if we have a drop action */ + if (action->type == RTE_FLOW_ACTION_TYPE_DROP) { + drop_action = action; + } + + /* second action, if specified, must not be a forwarding action */ + action = parsed_actions->actions[1]; + if (action != NULL && ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + /* if we didn't have a drop action before but now we do, remember that */ + if (drop_action == NULL && action != NULL && action->type == RTE_FLOW_ACTION_TYPE_DROP) { + drop_action = action; + } + /* drop must be the only action */ + if (drop_action != NULL && action != NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + return 0; } static int @@ -2566,24 +2511,45 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ci_flow_actions 
parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* queue/mark/drop allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_fdir_actions_check + }; + + if (hw->mac.type != ixgbe_mac_82599EB && + hw->mac.type != ixgbe_mac_X540 && + hw->mac.type != ixgbe_mac_X550 && + hw->mac.type != ixgbe_mac_X550EM_x && + hw->mac.type != ixgbe_mac_X550EM_a && + hw->mac.type != ixgbe_mac_E610) + return -ENOTSUP; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE; - if (hw->mac.type != ixgbe_mac_82599EB && - hw->mac.type != ixgbe_mac_X540 && - hw->mac.type != ixgbe_mac_X550 && - hw->mac.type != ixgbe_mac_X550EM_x && - hw->mac.type != ixgbe_mac_X550EM_a && - hw->mac.type != ixgbe_mac_E610) - return -ENOTSUP; - - ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_normal(dev, pattern, &parsed_actions, rule, error); if (!ret) goto step_next; - ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_tunnel(pattern, &parsed_actions, rule, error); if (ret) return ret; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
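[Editor's note] The ixgbe_fdir_actions_check() hook above encodes three combination rules on top of the generic allowed-list check: the first action must be a forwarding action (QUEUE or DROP), an optional second action may follow but must not be another forwarding action, and DROP must be the only action. A minimal model with simplified stand-in types (the rule set is taken from the diff; the function shape is illustrative):

```c
#include <assert.h>

/* Simplified stand-ins for the rte_flow action types involved. */
enum act { A_NONE, A_QUEUE, A_DROP, A_MARK };

static int is_fwd(enum act a)
{
	return a == A_QUEUE || a == A_DROP;
}

/* first is mandatory; second may be A_NONE. Returns 0 if accepted. */
static int fdir_actions_ok(enum act first, enum act second)
{
	if (!is_fwd(first))
		return -1; /* first action must be QUEUE or DROP */
	if (second != A_NONE && is_fwd(second))
		return -1; /* two forwarding actions conflict */
	if (first == A_DROP && second != A_NONE)
		return -1; /* drop must be the only action */
	return 0;
}
```

So QUEUE alone, QUEUE+MARK and DROP alone pass, while DROP+MARK, QUEUE+DROP and a leading MARK are all rejected; the signature-mode restriction on DROP is still handled later, inside ixgbe_parse_fdir_filter_normal().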
* [PATCH v1 09/25] net/ixgbe: use common checks in RSS filter 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (7 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 10/25] net/i40e: use common flow attribute checks Anatoly Burakov ` (17 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in RSS filter. As a result, some checks have become more stringent, in particular: - the group attribute is now explicitly rejected instead of being ignored - RSS now explicitly rejects unsupported RSS types at validation - the priority attribute was previously allowed but rejected values bigger than 0xFFFF despite not using priority anywhere - it is now rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 202 +++++++++++---------------- 1 file changed, 85 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index c857eb0227..6d1986f444 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -121,20 +121,6 @@ const struct rte_flow_item *next_no_void_pattern( } } -static inline -const struct rte_flow_action *next_no_void_action( - const struct rte_flow_action actions[], - const struct rte_flow_action *cur) -{ - const struct rte_flow_action *next = - cur ? cur + 1 : &actions[0]; - while (1) { - if (next->type != RTE_FLOW_ACTION_TYPE_VOID) - return next; - next++; - } -} - /* * All ixgbe engines mostly check the same stuff, so use a common check. 
*/ @@ -2579,6 +2565,62 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, return ret; } +/* Flow actions check specific to RSS filter */ +static int +ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action = parsed_actions->actions[0]; + const struct rte_flow_action_rss *rss_act = action->conf; + struct rte_eth_dev *dev = param->driver_ctx; + const size_t rss_key_len = sizeof(((struct ixgbe_rte_flow_rss_conf *)0)->key); + size_t q_idx, q; + + /* check if queue list is not empty */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue list is empty"); + } + + /* check if each RSS queue is valid */ + for (q_idx = 0; q_idx < rss_act->queue_num; q_idx++) { + q = rss_act->queue[q_idx]; + if (q >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue specified"); + } + } + + /* only support default hash function */ + if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Non-default RSS hash functions are not supported"); + } + /* levels aren't supported */ + if (rss_act->level) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "A nonzero RSS encapsulation level is not supported"); + } + /* check key length */ + if (rss_act->key_len != 0 && rss_act->key_len != rss_key_len) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key must be exactly 40 bytes long"); + } + /* filter out unsupported RSS types */ + if ((rss_act->types & ~IXGBE_RSS_OFFLOAD_ALL) != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS type specified"); + } + return 0; +} 
+ static int ixgbe_parse_rss_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, @@ -2586,109 +2628,35 @@ ixgbe_parse_rss_filter(struct rte_eth_dev *dev, struct ixgbe_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - const struct rte_flow_action *act; - const struct rte_flow_action_rss *rss; - uint16_t n; - - /** - * rss only supports forwarding, - * check if the first not void action is RSS. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rss = (const struct rte_flow_action_rss *)act->conf; - - if (!rss || !rss->queue_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "no valid queues"); - return -rte_errno; - } - - for (n = 0; n < rss->queue_num; n++) { - if (rss->queue[n] >= dev->data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "queue id > max number of queues"); - return -rte_errno; - } - } - - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "non-default RSS hash functions are not supported"); - if (rss->level) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "a nonzero RSS encapsulation level is not supported"); - if (rss->key_len && rss->key_len != RTE_DIM(rss_conf->key)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS hash key must be exactly 40 bytes"); - if (rss->queue_num > RTE_DIM(rss_conf->queue)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "too many queues for RSS context"); - if (ixgbe_rss_conf_init(rss_conf, rss)) - return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS context initialization 
failure"); - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only rss allowed here */ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check_rss, + .max_actions = 1, + }; + int ret; + const struct rte_flow_action *action; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + if 
(ixgbe_rss_conf_init(rss_conf, action->conf)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "RSS context initialization failure"); return 0; } -- 2.47.3
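The series replaces per-filter open-coded action loops with a shared checker driven by an allowed-type table, a cap on action count, and an optional driver callback (as in ixgbe_flow_actions_check_rss above). The real ci_flow_check_actions() API lands in patch 01/25 and is not shown here; the following is only a simplified, self-contained sketch of that pattern, with illustrative names that do not match the actual DPDK code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for rte_flow action types. */
enum action_type { ACT_VOID, ACT_RSS, ACT_QUEUE, ACT_END };

struct action {
	enum action_type type;
	const void *conf;
};

struct check_param {
	const enum action_type *allowed; /* list terminated by ACT_END */
	size_t max_actions;
};

/* Walk the action list: skip VOIDs, reject disallowed types, cap count. */
static int
check_actions(const struct action *actions, const struct check_param *p)
{
	size_t n = 0;

	for (; actions->type != ACT_END; actions++) {
		const enum action_type *a;

		if (actions->type == ACT_VOID)
			continue; /* VOID actions are always skipped */

		/* reject action types not in the allowed list */
		for (a = p->allowed; *a != ACT_END; a++)
			if (*a == actions->type)
				break;
		if (*a == ACT_END)
			return -1;

		if (++n > p->max_actions)
			return -1;
	}
	/* at least one non-VOID action is required */
	return n > 0 ? 0 : -1;
}
```

A filter-specific callback (like the RSS conf checks above) would then run only after this generic pass has succeeded, so it can assume the action list shape is already valid.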
* [PATCH v1 10/25] net/i40e: use common flow attribute checks 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (8 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov ` (16 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson There is no need to call the same attribute checks in multiple places when there are no cases where we do otherwise. Therefore, remove all the attribute check calls from all filter call paths and instead check the attributes once at the very beginning of flow validation, and use common flow attribute checks instead of driver-local ones. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_ethdev.h | 1 - drivers/net/intel/i40e/i40e_flow.c | 121 +++------------------------ 2 files changed, 10 insertions(+), 112 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h index 025901edb6..312a9db12d 100644 --- a/drivers/net/intel/i40e/i40e_ethdev.h +++ b/drivers/net/intel/i40e/i40e_ethdev.h @@ -1315,7 +1315,6 @@ struct i40e_filter_ctx { }; typedef int (*parse_filter_t)(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index c5bb787f28..05abaf8ce3 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -25,6 +25,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET) #define 
I40E_IPV6_FRAG_HEADER 44 #define I40E_TENANT_ARRAY_NUM 3 @@ -73,40 +75,32 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, const struct rte_flow_action *actions, struct rte_flow_error *error, struct i40e_tunnel_filter_conf *filter); -static int i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error); static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -120,7 +114,6 @@ static int i40e_flow_flush_ethertype_filter(struct i40e_pf *pf); static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf); static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const struct 
rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -132,7 +125,6 @@ i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter); static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1210,54 +1202,6 @@ i40e_find_parse_filter_func(struct rte_flow_item *pattern, uint32_t *idx) return parse_filter; } -/* Parse attributes */ -static int -i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - static int i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid) { @@ -1445,7 +1389,6 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1464,10 
+1407,6 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_ETHERTYPE; return ret; @@ -2539,7 +2478,6 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2556,10 +2494,6 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_FDIR; return 0; @@ -2824,7 +2758,6 @@ i40e_flow_parse_l4_pattern(const struct rte_flow_item *pattern, static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2841,10 +2774,6 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3075,7 +3004,6 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3093,10 +3021,6 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3326,7 +3250,6 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct 
rte_flow_action actions[], struct rte_flow_error *error, @@ -3344,10 +3267,6 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3482,7 +3401,6 @@ i40e_flow_parse_mpls_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3500,10 +3418,6 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3634,7 +3548,6 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev, static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3652,10 +3565,6 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3751,7 +3660,6 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3769,10 +3677,6 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3791,7 +3695,12 @@ i40e_flow_check(struct rte_eth_dev *dev, uint32_t item_num = 0; /* non-void item number of pattern*/ uint32_t i = 0; bool flag = false; - int ret = I40E_NOT_SUPPORTED; + int 
ret; + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) { + return ret; + } if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3806,22 +3715,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* Get the non-void item of action */ while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) i++; if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter_ctx->type = RTE_ETH_FILTER_HASH; return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); } @@ -3846,6 +3744,7 @@ i40e_flow_check(struct rte_eth_dev *dev, i40e_pattern_skip_void_item(items, pattern); i = 0; + ret = I40E_NOT_SUPPORTED; do { parse_filter = i40e_find_parse_filter_func(items, &i); if (!parse_filter && !flag) { @@ -3858,7 +3757,7 @@ i40e_flow_check(struct rte_eth_dev *dev, } if (parse_filter) - ret = parse_filter(dev, items, actions, error, filter_ctx); + ret = parse_filter(dev, items, actions, error, filter_ctx); flag = true; } while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns))); -- 2.47.3
* [PATCH v1 11/25] net/i40e: refactor RSS flow parameter checks 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (9 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 10/25] net/i40e: use common flow attribute checks Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov ` (15 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson Currently, the hash parser parameter checks are somewhat confusing as they have multiple mutually exclusive code paths and requirements, and it's difficult to reason about them because RSS flow parsing is interspersed with validation checks. To address that, refactor hash engine error checking to perform almost all validation at the beginning, with only happy paths being implemented in actual parsing functions. 
Some parameter combinations that were previously ignored (and perhaps produced a warning) are now explicitly rejected: - if no pattern is specified, RSS types are rejected - for queue lists and regions, RSS key is rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 13 +- drivers/net/intel/i40e/i40e_hash.c | 427 +++++++++++++++++------------ drivers/net/intel/i40e/i40e_hash.h | 2 +- 3 files changed, 264 insertions(+), 178 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 05abaf8ce3..ad80883f13 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -3701,6 +3701,7 @@ i40e_flow_check(struct rte_eth_dev *dev, if (ret) { return ret; } + /* action and pattern validation will happen in each respective engine */ if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3715,13 +3716,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - /* Get the non-void item of action */ - while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) - i++; - - if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - filter_ctx->type = RTE_ETH_FILTER_HASH; - return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); + /* try parsing as RSS */ + filter_ctx->type = RTE_ETH_FILTER_HASH; + ret = i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error); + if (!ret) { + return ret; } i = 0; diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c index cbb377295d..c6b48a9fc1 100644 --- a/drivers/net/intel/i40e/i40e_hash.c +++ b/drivers/net/intel/i40e/i40e_hash.c @@ -16,6 +16,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #ifndef BIT #define BIT(n) (1UL << (n)) #endif @@ -925,12 +927,7 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, { const uint8_t *key = rss_act->key; - if (!key || 
rss_act->key_len != sizeof(rss_conf->key)) { - if (rss_act->key_len != sizeof(rss_conf->key)) - PMD_DRV_LOG(WARNING, - "RSS key length invalid, must be %u bytes, now set key to default", - (uint32_t)sizeof(rss_conf->key)); - + if (key == NULL) { memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key)); } else { memcpy(rss_conf->key, key, sizeof(rss_conf->key)); @@ -941,45 +938,29 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, } static int -i40e_hash_parse_queues(const struct rte_eth_dev *dev, - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf, + struct rte_flow_error *error) { - struct i40e_pf *pf; - struct i40e_hw *hw; - uint16_t i; - size_t max_queue; - - hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (!rss_act->queue_num || - rss_act->queue_num > hw->func_caps.rss_table_size) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queue number"); + rss_conf->symmetric_enable = rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when queues specified"); + i40e_hash_parse_key(rss_act, rss_conf); - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) - max_queue = i40e_pf_calc_configured_queues_num(pf); - else - max_queue = pf->dev_data->nb_rx_queues; + rss_conf->conf.func = rss_act->func; + rss_conf->conf.types = rss_act->types; + rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); - max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); - - for (i = 0; i < rss_act->queue_num; i++) { - if (rss_act->queue[i] >= max_queue) - break; - } - - if (i < 
rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queues"); + return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, + rss_conf, error); +} +static int +i40e_hash_parse_queues(const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf) +{ memcpy(rss_conf->queue, rss_act->queue, rss_act->queue_num * sizeof(rss_conf->queue[0])); rss_conf->conf.queue = rss_conf->queue; @@ -988,113 +969,38 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev, } static int -i40e_hash_parse_queue_region(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], +i40e_hash_parse_queue_region(const struct rte_flow_item pattern[], const struct rte_flow_action_rss *rss_act, struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - struct i40e_pf *pf; const struct rte_flow_item_vlan *vlan_spec, *vlan_mask; - uint64_t hash_queues; - uint32_t i; - - if (pattern[1].type != RTE_FLOW_ITEM_TYPE_END) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - &pattern[1], - "Pattern not supported."); vlan_spec = pattern->spec; vlan_mask = pattern->mask; - if (!vlan_spec || !vlan_mask || - (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, pattern, - "Pattern error."); - if (!rss_act->queue) + /* VLAN must have spec and mask */ + if (vlan_spec == NULL || vlan_mask == NULL) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queues not specified"); - - if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when configure queue region"); + RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN pattern spec and mask required"); + } + /* for mask, VLAN/TCI must be masked appropriately */ + if ((rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 0x7) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN 
pattern mask invalid"); + } /* Use a 64 bit variable to represent all queues in a region. */ RTE_BUILD_BUG_ON(I40E_MAX_Q_PER_TC > 64); - if (!rss_act->queue_num || - !rte_is_power_of_2(rss_act->queue_num) || - rss_act->queue_num + rss_act->queue[0] > I40E_MAX_Q_PER_TC) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queue number error"); - - for (i = 1; i < rss_act->queue_num; i++) { - if (rss_act->queue[i - 1] + 1 != rss_act->queue[i]) - break; - } - - if (i < rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Queues must be incremented continuously"); - - /* Map all queues to bits of uint64_t */ - hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & - ~(BIT_ULL(rss_act->queue[0]) - 1); - - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (hash_queues & ~pf->hash_enabled_queues) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Some queues are not in LUT"); - rss_conf->region_queue_num = (uint8_t)rss_act->queue_num; rss_conf->region_queue_start = rss_act->queue[0]; rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13; return 0; } -static int -i40e_hash_parse_global_conf(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) -{ - if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Symmetric function should be set with pattern types"); - - rss_conf->conf.func = rss_act->func; - - if (rss_act->types) - PMD_DRV_LOG(WARNING, - "RSS types are ignored when no pattern specified"); - - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_queue_region(dev, pattern, rss_act, - rss_conf, error); - - if (rss_act->queue) - return 
i40e_hash_parse_queues(dev, rss_act, rss_conf, error); - - if (rss_act->key_len) { - i40e_hash_parse_key(rss_act, rss_conf); - return 0; - } - - if (rss_act->func == RTE_ETH_HASH_FUNCTION_DEFAULT) - PMD_DRV_LOG(WARNING, "Nothing change"); - return 0; -} - static bool i40e_hash_validate_rss_types(uint64_t rss_types) { @@ -1124,83 +1030,264 @@ i40e_hash_validate_rss_types(uint64_t rss_types) } static int -i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_validate_rss_pattern(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) { - if (rss_act->queue) + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + + /* queue list is not supported */ + if (rss_act->queue_num == 0) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS Queues not supported when pattern specified"); - rss_conf->symmetric_enable = false; /* by default, symmetric is disabled */ + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues not supported when pattern specified"); + } + /* disallow unsupported hash functions */ switch (rss_act->func) { case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: - rss_conf->symmetric_enable = true; - break; case RTE_ETH_HASH_FUNCTION_DEFAULT: case RTE_ETH_HASH_FUNCTION_TOEPLITZ: case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: break; default: return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS hash function not supported " - "when pattern specified"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS hash function not supported when pattern specified"); } if (!i40e_hash_validate_rss_types(rss_act->types)) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "RSS types are invalid"); + 
RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "RSS types are invalid"); - if (rss_act->key_len) - i40e_hash_parse_key(rss_act, rss_conf); + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } - rss_conf->conf.func = rss_act->func; - rss_conf->conf.types = rss_act->types; - rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); + return 0; +} - return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, - rss_conf, error); +static int +i40e_hash_validate_rss_common(const struct rte_flow_action_rss *rss_act, + struct rte_flow_error *error) +{ + /* for empty patterns, symmetric toeplitz is not supported */ + if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Symmetric hash function not supported without specific patterns"); + } + + /* hash types are not supported for global RSS configuration */ + if (rss_act->types != 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS types not supported without a pattern"); + } + + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_region(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct rte_eth_dev *dev = param->driver_ctx; + const struct i40e_pf *pf; + uint64_t hash_queues; + + if (i40e_hash_validate_rss_common(rss_act, error)) + return -rte_errno; + + 
RTE_BUILD_BUG_ON(sizeof(hash_queues) != sizeof(pf->hash_enabled_queues)); + + /* having RSS key is not supported */ + if (rss_act->key != NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key not supported"); + } + + /* queue region must be specified */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues missing"); + } + + /* queue region must be power of two */ + if (!rte_is_power_of_2(rss_act->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue number must be power of two"); + } + + /* generic checks already filtered out discontiguous/non-unique RSS queues */ + + /* queues must not exceed maximum queues per traffic class */ + if (rss_act->queue[rss_act->queue_num - 1] >= I40E_MAX_Q_PER_TC) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue index"); + } + + /* queues must be in LUT */ + pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & + ~(BIT_ULL(rss_act->queue[0]) - 1); + + if (hash_queues & ~pf->hash_enabled_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Some queues are not in LUT"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_list(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct rte_eth_dev *dev = param->driver_ctx; + struct i40e_pf *pf; + struct i40e_hw *hw; + size_t max_queue; + bool has_queue, has_key; + + if (i40e_hash_validate_rss_common(rss_act, error)) + return -rte_errno; + + has_queue = rss_act->queue != NULL; + has_key = rss_act->key != NULL; + + /* if we have queues, we must not have key */ + if 
(has_queue && has_key) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key for queue region is not supported"); + } + + /* if there are no queues, no further checks needed */ + if (!has_queue) + return 0; + + /* check queue number limits */ + hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + if (rss_act->queue_num > hw->func_caps.rss_table_size) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Too many RSS queues"); + } + + pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) + max_queue = i40e_pf_calc_configured_queues_num(pf); + else + max_queue = pf->dev_data->nb_rx_queues; + + max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); + + /* we know RSS queues are contiguous so we only need to check last queue */ + if (rss_act->queue[rss_act->queue_num - 1] >= max_queue) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue"); + } + + return 0; } int -i40e_hash_parse(const struct rte_eth_dev *dev, +i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev + /* each pattern type will add specific check function */ + }; const struct rte_flow_action_rss *rss_act; + int ret; - if (actions[1].type != RTE_FLOW_ACTION_TYPE_END) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - &actions[1], - "Only support one action for RSS."); - - rss_act = (const struct rte_flow_action_rss *)actions[0].conf; - if (rss_act->level) - return rte_flow_error_set(error, ENOTSUP, - 
RTE_FLOW_ERROR_TYPE_ACTION_CONF, - actions, - "RSS level is not supported"); + /* + * We have two possible paths: global RSS configuration, and an RSS pattern action. + * + * For global patterns, we act on two types of flows: + * - Empty pattern ([END]) + * - VLAN pattern ([VLAN] -> [END]) + * + * Everything else is handled by pattern action parser. + */ + bool is_empty, is_vlan; while (pattern->type == RTE_FLOW_ITEM_TYPE_VOID) pattern++; - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_END || - pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_global_conf(dev, pattern, rss_act, - rss_conf, error); + is_empty = pattern[0].type == RTE_FLOW_ITEM_TYPE_END; + is_vlan = pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN && + pattern[1].type == RTE_FLOW_ITEM_TYPE_END; - return i40e_hash_parse_pattern_act(dev, pattern, rss_act, - rss_conf, error); + /* VLAN path */ + if (is_vlan) { + ac_param.check = i40e_hash_validate_queue_region; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + /* set up RSS functions */ + rss_conf->conf.func = rss_act->func; + return i40e_hash_parse_queue_region(pattern, rss_act, rss_conf, error); + } + /* Empty pattern path */ + if (is_empty) { + ac_param.check = i40e_hash_validate_queue_list; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + rss_conf->conf.func = rss_act->func; + /* if there is a queue list, take that path */ + if (rss_act->queue != NULL) { + return i40e_hash_parse_queues(rss_act, rss_conf); + } + /* otherwise just parse RSS key */ + if (rss_act->key != NULL) { + i40e_hash_parse_key(rss_act, rss_conf); + } + return 0; + } + ac_param.check = i40e_hash_validate_rss_pattern; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + + /* pattern case 
*/ + return i40e_hash_parse_pattern_act(dev, pattern, rss_act, rss_conf, error); } static void diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h index 2513d84565..99df4bccd0 100644 --- a/drivers/net/intel/i40e/i40e_hash.h +++ b/drivers/net/intel/i40e/i40e_hash.h @@ -13,7 +13,7 @@ extern "C" { #endif -int i40e_hash_parse(const struct rte_eth_dev *dev, +int i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
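[Editorial aside: the LUT check in `i40e_hash_validate_queue_region()` above leans on the generic checks having already guaranteed that the RSS queues are contiguous and unique, so the test reduces to range-mask arithmetic. A self-contained sketch of that arithmetic, with `BIT_ULL` redefined locally and a plain `uint64_t` standing in for the driver's `pf->hash_enabled_queues` bitmap:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for DPDK's BIT_ULL(); driver state is passed in
 * as a plain uint64_t rather than read from the PF structure. */
#define BIT_ULL(n) (1ULL << (n))

/* Build a mask with bits [first, first + num) set and check that every
 * queue in that contiguous range is enabled in the hash LUT -- the same
 * arithmetic the patch applies against pf->hash_enabled_queues. Assumes
 * first + num < 64, as the driver's per-TC queue bound guarantees. */
static bool
queue_region_in_lut(uint32_t first, uint32_t num, uint64_t hash_enabled_queues)
{
	uint64_t hash_queues = (BIT_ULL(first + num) - 1) & ~(BIT_ULL(first) - 1);

	return (hash_queues & ~hash_enabled_queues) == 0;
}
```

For example, a region of 4 queues starting at queue 2 needs bits 2..5 enabled; with only queues 0..7 in the LUT, a 4-queue region starting at queue 6 spills past the enabled set and is rejected.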
* [PATCH v1 12/25] net/i40e: use common action checks for ethertype 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (10 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov ` (14 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for ethertype filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 56 +++++++++++++----------------- 1 file changed, 24 insertions(+), 32 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index ad80883f13..0e9880e9ce 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1347,43 +1347,35 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, struct rte_flow_error *error, struct rte_eth_ethertype_filter *filter) { - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; - /* Check if the first non-void action is QUEUE or DROP. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *act_q = action->conf; + /* check queue index */ + if (act_q->index >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue index"); + } filter->queue = act_q->index; - if (filter->queue >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for" - " ethertype_filter."); - return -rte_errno; - } - } else { + } else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) { filter->flags |= RTE_ETHTYPE_FLAGS_DROP; } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
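[Editorial aside: the patch above delegates VOID skipping, the allowed-type check, and the action count limit to `ci_flow_check_actions()`, whose definition lives in the common header rather than this diff. A simplified, self-contained approximation of that contract -- all types and names here are illustrative stand-ins, not the real DPDK definitions:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow actions and the common-parser types. */
enum act_type { ACT_VOID, ACT_QUEUE, ACT_DROP, ACT_MARK, ACT_END };

struct flow_action {
	enum act_type type;
	const void *conf;
};

#define MAX_PARSED_ACTIONS 2

struct flow_actions {
	const struct flow_action *actions[MAX_PARSED_ACTIONS];
	size_t count;
};

struct flow_actions_check_param {
	const enum act_type *allowed_types; /* ACT_END-terminated allow-list */
	size_t max_actions;
};

/* Approximation of the common checker's contract: skip VOID actions,
 * reject any type not on the allow-list, enforce the max action count,
 * and hand back pointers to the surviving actions for the caller. */
static int
flow_check_actions(const struct flow_action *actions,
		   const struct flow_actions_check_param *param,
		   struct flow_actions *out)
{
	out->count = 0;
	for (; actions->type != ACT_END; actions++) {
		bool allowed = false;

		if (actions->type == ACT_VOID)
			continue;
		for (const enum act_type *t = param->allowed_types; *t != ACT_END; t++) {
			if (*t == actions->type) {
				allowed = true;
				break;
			}
		}
		if (!allowed || out->count >= param->max_actions)
			return -1; /* the real helper also fills rte_flow_error */
		out->actions[out->count++] = actions;
	}
	return 0;
}
```

With the boilerplate centralized like this, each per-filter parser only deals with semantics (queue bounds, action ordering) on the returned action pointers, which is what shrinks the driver parse functions throughout this series.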
* [PATCH v1 13/25] net/i40e: use common action checks for FDIR 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (11 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov ` (13 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for flow director filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 139 ++++++++++++++++------------- 1 file changed, 76 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 0e9880e9ce..d8c8654cfa 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -2371,28 +2371,49 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, struct i40e_fdir_filter_conf *filter) { struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_FLAG, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is QUEUE or DROP or PASSTHRU. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; + + switch (first->type) { case RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = act->conf; + { + const struct rte_flow_action_queue *act_q = first->conf; + /* check against PF constraints */ + if (!filter->input.flow_ext.is_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } + /* check against VF constraints */ + if (filter->input.flow_ext.is_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } filter->action.rx_queue = act_q->index; - if ((!filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->dev_data->nb_rx_queues) || - (filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue ID for FDIR."); - return -rte_errno; - } filter->action.behavior = I40E_FDIR_ACCEPT; break; + } case RTE_FLOW_ACTION_TYPE_DROP: filter->action.behavior = I40E_FDIR_REJECT; break; @@ -2400,69 +2421,61 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, filter->action.behavior = I40E_FDIR_PASSTHRU; break; case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *act_m = first->conf; filter->action.behavior = I40E_FDIR_PASSTHRU; - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; - break; + filter->soft_id = act_m->id; + break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + 
"Invalid first action for FDIR"); } - /* Check if the next non-void item is MARK or FLAG or END. */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + /* do we have another? */ + if (second == NULL) + return 0; + + switch (second->type) { case RTE_FLOW_ACTION_TYPE_MARK: - if (mark_spec) { - /* Double MARK actions requested */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + const struct rte_flow_action_mark *act_m = second->conf; + /* only one mark action can be specified */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; + filter->soft_id = act_m->id; break; + } case RTE_FLOW_ACTION_TYPE_FLAG: - if (mark_spec) { - /* MARK + FLAG not supported */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + /* mark + flag is unsupported */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } filter->action.report_status = I40E_FDIR_NO_REPORT_STATUS; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - if (filter->action.behavior != I40E_FDIR_PASSTHRU) { - /* RSS filter won't be next if FDIR did not pass thru */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + /* RSS filter only can be after passthru or mark */ + if (first->type != RTE_FLOW_ACTION_TYPE_PASSTHRU && + first->type != RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } break; - case RTE_FLOW_ACTION_TYPE_END: - return 0; default: - 
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; - } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
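[Editorial aside: with the allow-list handled by the common checker, the FDIR-specific logic in the patch above reduces to a small table of valid (first, second) action combinations. A sketch of those rules with illustrative enum names -- the real code works on `rte_flow` action types and reports failures through `rte_flow_error_set()`:]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative action types; ACT_NONE marks "no second action". */
enum act_type {
	ACT_QUEUE, ACT_DROP, ACT_PASSTHRU, ACT_MARK, ACT_FLAG, ACT_RSS, ACT_NONE
};

/* Combination rules enforced by the patch: the first action must be QUEUE,
 * DROP, PASSTHRU or MARK; a second MARK or FLAG must not follow a MARK;
 * RSS may only follow PASSTHRU or MARK; nothing else is valid in second
 * position. */
static bool
fdir_actions_valid(enum act_type first, enum act_type second)
{
	if (first != ACT_QUEUE && first != ACT_DROP &&
	    first != ACT_PASSTHRU && first != ACT_MARK)
		return false;
	if (second == ACT_NONE)
		return true;
	switch (second) {
	case ACT_MARK:	/* double MARK is rejected */
	case ACT_FLAG:	/* MARK + FLAG is rejected */
		return first != ACT_MARK;
	case ACT_RSS:	/* RSS only after PASSTHRU or MARK */
		return first == ACT_PASSTHRU || first == ACT_MARK;
	default:
		return false;
	}
}
```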
* [PATCH v1 14/25] net/i40e: use common action checks for tunnel 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (12 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 15/25] net/iavf: use common flow attribute checks Anatoly Burakov ` (12 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action parsing infrastructure to check flow actions for the flow director tunnel filter. As a result, more stringent checks are performed against parameters, specifically the following: - reject NULL conf for the VF action (instead of attempting a NULL dereference) - the second action was expected to be QUEUE, but was previously ignored rather than rejected if it was anything else Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 104 +++++++++++++---------- 1 file changed, 48 insertions(+), 56 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index d8c8654cfa..dd3bf5822a 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1103,15 +1103,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = { { pattern_fdir_ipv6_sctp, i40e_flow_parse_l4_cloud_filter }, }; -#define NEXT_ITEM_OF_ACTION(act, actions, index) \ - do { \ - act = actions + index; \ - while (act->type == RTE_FLOW_ACTION_TYPE_VOID) { \ - index++; \ - act = actions + index; \ - } \ - } while (0) - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * i40e_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2514,61 +2505,62 @@ i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter) { struct i40e_pf *pf
= I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_vf *act_vf; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is PF or VF. */ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_PF && - act->type != RTE_FLOW_ACTION_TYPE_VF) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = act->conf; - filter->vf_id = act_vf->id; + /* first action must be PF or VF */ + if (first->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *vf = first->conf; + if (vf->id >= pf->vf_num) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid VF ID for tunnel filter"); + return -rte_errno; + } + filter->vf_id = vf->id; filter->is_to_vf = 1; - if (filter->vf_id >= pf->vf_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid VF ID for tunnel filter"); - return -rte_errno; - } + } else if (first->type != RTE_FLOW_ACTION_TYPE_PF) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Unsupported action"); } - /* Check if the next non-void item is QUEUE */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = 
act->conf; - filter->queue_id = act_q->index; - if ((!filter->is_to_vf) && - (filter->queue_id >= pf->dev_data->nb_rx_queues)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } else if (filter->is_to_vf && - (filter->queue_id >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } - } + /* check if second action is QUEUE */ + if (second == NULL) + return 0; - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; + act_q = second->conf; + /* check queue ID for PF flow */ + if (!filter->is_to_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); + } + /* check queue ID for VF flow */ + if (filter->is_to_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); } + filter->queue_id = act_q->index; return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
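[Editorial aside: the tightened tunnel checks in the patch above boil down to validating the optional QUEUE action against different bounds depending on whether the flow targets the PF or a VF. A minimal sketch, with plain parameters standing in for `pf->dev_data->nb_rx_queues` and `pf->vf_nb_qps`:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Validate the optional QUEUE action of a tunnel filter: VF-bound flows
 * are limited to the per-VF queue-pair count, PF-bound flows to the
 * device's Rx queue count. */
static bool
tunnel_queue_valid(bool is_to_vf, uint16_t queue,
		   uint16_t nb_rx_queues, uint16_t vf_nb_qps)
{
	return queue < (is_to_vf ? vf_nb_qps : nb_rx_queues);
}
```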
* [PATCH v1 15/25] net/iavf: use common flow attribute checks 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (13 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov ` (11 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Replace custom attr checks with a call to common checks. Flow subscription engine supports priority but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 8 ++- drivers/net/intel/iavf/iavf_fsub.c | 20 ++++++- drivers/net/intel/iavf/iavf_generic_flow.c | 67 +++------------------- drivers/net/intel/iavf/iavf_generic_flow.h | 2 +- drivers/net/intel/iavf/iavf_hash.c | 10 ++-- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 8 ++- 6 files changed, 43 insertions(+), 72 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 9eae874800..7dce5086cf 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #include "virtchnl.h" #include "iavf_rxtx.h" @@ -1592,7 +1593,7 @@ iavf_fdir_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1603,8 +1604,9 @@ iavf_fdir_parse(struct iavf_adapter *ad, memset(filter, 0, sizeof(*filter)); - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, 
error); + if (ret) + return ret; item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index bfb34695de..010c1d5a44 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -20,6 +20,7 @@ #include <rte_flow.h> #include <iavf.h> #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define IAVF_IPV6_ADDR_LENGTH 16 @@ -725,12 +726,15 @@ iavf_fsub_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; int ret = 0; filter = rte_zmalloc(NULL, sizeof(*filter), 0); @@ -741,6 +745,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, return -ENOMEM; } + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + goto error; + + if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Only support priority 0 and 1."); + ret = -rte_errno; + goto error; + } + /* search flow subscribe pattern */ pattern_match_item = iavf_search_pattern_match_item(pattern, array, array_len, error); @@ -762,7 +778,7 @@ iavf_fsub_parse(struct iavf_adapter *ad, goto error; /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, priority, + ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, error, filter); error: diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 42ecc90d1d..b8f6414b16 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -1785,7 +1785,7 @@ enum 
rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2_ppp_ipv6_tcp[] = { typedef struct iavf_flow_engine * (*parse_engine_t)(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -1939,45 +1939,6 @@ iavf_unregister_parser(struct iavf_flow_parser *parser, } } -static int -iavf_flow_valid_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* support priority for flow subscribe */ - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * iavf_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2106,7 +2067,7 @@ static struct iavf_flow_engine * iavf_parse_engine_create(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2120,7 +2081,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - 
pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2136,7 +2097,7 @@ static struct iavf_flow_engine * iavf_parse_engine_validate(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2150,7 +2111,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2182,7 +2143,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t iavf_parse_engine, struct rte_flow_error *error) { - int ret = IAVF_ERR_CONFIG; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); @@ -2200,29 +2160,18 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - - ret = iavf_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->dist_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->ipsec_crypto_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h index 
b97cf8b7ff..ddc554996d 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.h +++ b/drivers/net/intel/iavf/iavf_generic_flow.h @@ -471,7 +471,7 @@ typedef int (*parse_pattern_action_t)(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index d864998402..b569450b82 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -23,6 +23,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" #include "iavf_hash.h" +#include "../common/flow_check.h" #define IAVF_PHINT_NONE 0 #define IAVF_PHINT_GTPU BIT_ULL(0) @@ -87,7 +88,7 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1520,7 +1521,7 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1529,8 +1530,9 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint64_t phint = IAVF_PHINT_NONE; int ret = 0; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index 1a3004b0fc..5c47b3ac4b 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -14,6 +14,7 @@ #include "iavf_rxtx.h" #include "iavf_log.h" 
#include "iavf_generic_flow.h" +#include "../common/flow_check.h" #include "iavf_ipsec_crypto.h" #include "iavf_ipsec_crypto_capabilities.h" @@ -1883,15 +1884,16 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_pattern_match_item *item = NULL; int ret = -1; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 16/25] net/iavf: use common action checks for IPsec 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (14 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 15/25] net/iavf: use common flow attribute checks Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 17/25] net/iavf: use common action checks for hash Anatoly Burakov ` (10 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for IPsec filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 34 +++++++++------------- 1 file changed, 14 insertions(+), 20 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index 5c47b3ac4b..8a14216716 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -1677,26 +1677,12 @@ parse_udp_item(const struct rte_flow_item_udp *item, struct rte_udp_hdr *udp) udp->src_port = item->hdr.src_port; } -static int -has_security_action(const struct rte_flow_action actions[], - const void **session) -{ - /* only {SECURITY; END} supported */ - if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && - actions[1].type == RTE_FLOW_ACTION_TYPE_END) { - *session = actions[0].conf; - return true; - } - return false; -} - static struct iavf_ipsec_flow_item * iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_security_session *session, uint32_t type) { - const void *session; struct iavf_ipsec_flow_item *ipsec_flow = rte_malloc("security-flow-rule", sizeof(struct iavf_ipsec_flow_item), 0); @@ -1763,9 +1749,6 @@ 
iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, goto flow_cleanup; } - if (!has_security_action(actions, &session)) - goto flow_cleanup; - if (!iavf_ipsec_crypto_action_valid(ethdev, session, ipsec_flow->spi)) goto flow_cleanup; @@ -1888,6 +1871,14 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; struct iavf_pattern_match_item *item = NULL; int ret = -1; @@ -1895,12 +1886,15 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, if (ret) return ret; + if (ci_flow_check_actions(actions, ¶m, &parsed_actions, error) < 0) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { + const struct rte_security_session *session = parsed_actions.actions[0]->conf; uint32_t type = (uint64_t)(item->meta); struct iavf_ipsec_flow_item *fi = - iavf_ipsec_flow_item_parse(ad->vf.eth_dev, - pattern, actions, type); + iavf_ipsec_flow_item_parse(ad->vf.eth_dev, pattern, session, type); if (fi && meta) { *meta = fi; ret = 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 17/25] net/iavf: use common action checks for hash 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (15 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov ` (9 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_hash.c | 143 +++++++++++++++-------------- 1 file changed, 72 insertions(+), 71 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index b569450b82..d8c9e900e8 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -1428,95 +1428,81 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -iavf_hash_parse_action(struct iavf_pattern_match_item *match_item, - const struct rte_flow_action actions[], - uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, - struct rte_flow_error *error) +iavf_hash_parse_rss_type(struct iavf_pattern_match_item *match_item, + const struct rte_flow_action_rss *rss, + uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, + struct rte_flow_error *error) { - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; - /* Supported action is RSS. 
*/ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; + rss_meta->rss_algorithm = rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ ? + VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC : + VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_XOR_ASYMMETRIC; - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "function simple_xor is not supported"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC; - } else { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - } + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == IAVF_PHINT_RAW) + return 0; - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); + if (iavf_any_invalid_rss_type(rss->func, rss_type, match_item->input_set_mask)) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS type not supported"); + } - if (rss->queue_num) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); + memcpy(&rss_meta->proto_hdrs, match_item->meta, sizeof(struct virtchnl_proto_hdrs)); - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == IAVF_PHINT_RAW) - break; + iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, rss_type, pattern_hint); - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. - */ - rss_type = rte_eth_rss_hf_refine(rss_type); + return 0; +} - if (iavf_any_invalid_rss_type(rss->func, rss_type, - match_item->input_set_mask)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); +static int +iavf_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss = actions->actions[0]->conf; - memcpy(&rss_meta->proto_hdrs, match_item->meta, - sizeof(struct virtchnl_proto_hdrs)); + /* filter out unsupported RSS functions */ + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + default: + break; + } - iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, - rss_type, pattern_hint); - break; + if (rss->level != 0) { + return rte_flow_error_set(error, ENOTSUP, + 
RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Nonzero RSS encapsulation level is not supported"); + } - case RTE_FLOW_ACTION_TYPE_END: - break; + if (rss->key_len != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS key is not supported"); + } - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; - } + if (rss->queue_num != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queue region is not supported"); } return 0; } static int -iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, +iavf_hash_parse_pattern_action(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, uint32_t array_len, const struct rte_flow_item pattern[], @@ -1525,6 +1511,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = ad, + .check = iavf_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct iavf_pattern_match_item *pattern_match_item; struct iavf_rss_meta *rss_meta_ptr; uint64_t phint = IAVF_PHINT_NONE; @@ -1534,6 +1531,10 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, &param, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1566,8 +1567,8 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, } } - ret = iavf_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret =
iavf_hash_parse_rss_type(pattern_match_item, rss, phint, rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
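The refactored hash path splits RSS handling into a config check (hash function, encapsulation level, key length, queue count) and the later RSS-type refinement. The config rules reduce to a small predicate; a standalone sketch with illustrative stand-ins for enum rte_eth_hash_function and the relevant rte_flow_action_rss fields:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins; names are illustrative only. */
enum hash_func {
	HF_DEFAULT,
	HF_TOEPLITZ,
	HF_SIMPLE_XOR,
	HF_SYMMETRIC_TOEPLITZ,
	HF_SYMMETRIC_TOEPLITZ_SORT,
};

struct action_rss {
	enum hash_func func;
	uint32_t level;     /* encapsulation level; only 0 is supported */
	uint32_t key_len;   /* explicit keys are not supported */
	uint32_t queue_num; /* queue regions are not supported here */
};

/* True when the hash engine can accept the RSS action configuration:
 * unsupported hash functions are filtered out, and level/key/queues
 * must all be left at zero. */
static bool hash_rss_conf_ok(const struct action_rss *rss)
{
	if (rss->func == HF_SIMPLE_XOR || rss->func == HF_SYMMETRIC_TOEPLITZ_SORT)
		return false;
	return rss->level == 0 && rss->key_len == 0 && rss->queue_num == 0;
}
```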
* [PATCH v1 18/25] net/iavf: use common action checks for FDIR 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (16 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 17/25] net/iavf: use common action checks for hash Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 19/25] net/iavf: use common action checks for fsub Anatoly Burakov ` (8 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 359 +++++++++++++---------------- 1 file changed, 157 insertions(+), 202 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 7dce5086cf..1f17f8fa24 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -441,204 +441,6 @@ static struct iavf_flow_engine iavf_fdir_engine = { .type = IAVF_FLOW_ENGINE_FDIR, }; -static int -iavf_fdir_parse_action_qregion(struct iavf_adapter *ad, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct virtchnl_filter_action *filter_action) -{ - struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != rss->queue[i] + 1) { 
- rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - if (rss->queue_num > vf->max_rss_qregion) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size cannot be large than the supported max RSS queue region"); - return -rte_errno; - } - - filter_action->act_conf.queue.index = rss->queue[0]; - filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; - - return 0; -} - -static int -iavf_fdir_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct iavf_fdir_conf *filter) -{ - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - int ret; - - int number = 0; - struct virtchnl_filter_action *filter_action; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - 
filter_action->type = VIRTCHNL_ACTION_DROP; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_QUEUE; - filter_action->act_conf.queue.index = act_q->index; - - if (filter_action->act_conf.queue.index >= - ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid queue for FDIR."); - return -rte_errno; - } - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_Q_REGION; - - ret = iavf_fdir_parse_action_qregion(ad, - error, actions, filter_action); - if (ret) - return ret; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - - filter->mark_flag = 1; - mark_spec = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_MARK; - filter_action->act_conf.mark_id = mark_spec->id; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (number > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (dest_num 
+ mark_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* Mark only is equal to mark + passthru. */ - if (dest_num == 0) { - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - filter->add_fltr.rule_cfg.action_set.count = ++number; - } - - return 0; -} - static bool iavf_fdir_refine_input_set(const uint64_t input_set, const uint64_t input_set_mask, @@ -1587,6 +1389,145 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, return 0; } +static int +iavf_fdir_action_check_qregion(struct iavf_adapter *ad, + const struct rte_flow_action_rss *rss, + struct rte_flow_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); + + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + } + + if (rss->queue_num > vf->max_rss_qregion) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size cannot be large than the supported max RSS queue region"); + } + + return 0; +} + +static int +iavf_fdir_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct iavf_adapter *ad = param->driver_ctx; + struct iavf_info *vf = 
IAVF_DEV_PRIVATE_TO_VF(ad); + struct iavf_fdir_conf *filter = &vf->fdir.conf; + uint32_t dest_num = 0, mark_num = 0; + size_t i, number = 0; + bool has_drop = false; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + has_drop = true; + + filter_action->type = VIRTCHNL_ACTION_DROP; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + dest_num++; + + act_q = act->conf; + + filter_action->type = VIRTCHNL_ACTION_QUEUE; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid queue index."); + } + filter_action->act_conf.queue.index = act_q->index; + + break; + } + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_Q_REGION; + + ret = iavf_fdir_action_check_qregion(ad, rss, error); + if (ret) + return ret; + + filter_action->act_conf.queue.index = rss->queue[0]; + filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec; + mark_num++; + + filter->mark_flag = 1; + mark_spec = act->conf; + + filter_action->type = VIRTCHNL_ACTION_MARK; + filter_action->act_conf.mark_id = mark_spec->id; + + break; + } + default: + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action."); + } + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + if (dest_num > 1 || mark_num > 1 || (has_drop && mark_num > 1)) { + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Unsupported action combination"); + } + + /* Mark only is equal to mark + passthru. */ + if (dest_num == 0) { + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + return 0; +} + static int iavf_fdir_parse(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, @@ -1597,6 +1538,20 @@ iavf_fdir_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fdir_parse_action_check, + .driver_ctx = ad + }; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct iavf_fdir_conf *filter = &vf->fdir.conf; struct iavf_pattern_match_item *item = NULL; @@ -1608,6 +1563,10 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, &param, &parsed_actions, error); + if (ret) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) return -rte_errno; @@ -1617,10 +1576,6 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) goto error; - ret = iavf_fdir_parse_action(ad, actions, error, filter); - if (ret) - goto error; - if (meta) *meta = filter; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
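The queue-region rules kept by iavf_fdir_action_check_qregion() together with the `rte_fls_u32(queue_num) - 1` encoding amount to: a power-of-two region size between 2 and 128 that fits inside the device's Rx queue range, encoded as log2 of the size. A standalone sketch of that math, with a portable fls stand-in and illustrative limit names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_QREGION_SIZE 128 /* mirrors IAVF_FDIR_MAX_QREGION_SIZE */

static bool is_pow2(uint32_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

/* 1-based position of the most significant set bit (0 for input 0);
 * a portable stand-in for rte_fls_u32(). */
static uint32_t fls_u32(uint32_t v)
{
	uint32_t n = 0;

	while (v != 0) {
		n++;
		v >>= 1;
	}
	return n;
}

/* Validate a contiguous queue region starting at first_queue and compute
 * the log2 "region" encoding carried in the virtchnl filter action. */
static int qregion_encode(uint32_t first_queue, uint32_t queue_num,
			  uint32_t nb_rx_queues, uint32_t *region)
{
	if (queue_num <= 1 || queue_num > MAX_QREGION_SIZE || !is_pow2(queue_num))
		return -1;
	if (first_queue + queue_num > nb_rx_queues)
		return -1;
	*region = fls_u32(queue_num) - 1;
	return 0;
}
```

For a region of 8 queues, fls(8) is 4, so the encoded region value is 3, i.e. log2(8).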
* [PATCH v1 19/25] net/iavf: use common action checks for fsub 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (17 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 20/25] net/iavf: use common action checks for flow query Anatoly Burakov ` (7 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking infrastructure to check flow actions for the flow subscription filter. The existing implementation had a couple of issues that do not rise to the level of bugs, but are still questionable design choices. For one, the DROP action was accepted by the action check (a single action was allowed as long as it wasn't RSS or QUEUE) but was later rejected by the action parse (because the absence of a port representor action was treated as an error). This is fixed by removing DROP action support from the check stage. For another, the PORT_REPRESENTOR action incremented the action counter without writing anything into the action array, which, given that the action list is zero-initialized, meant that the default action (drop) was kept in the action list. Because the actual PF treats drop as a no-op, nothing bad happened when a DROP action ended up on the action list; nothing bad happens either if there is no action to begin with, so we remedy these unorthodox semantics by treating the PORT_REPRESENTOR action as a no-op and not adding anything to the action list. As a final note, now that all filter parsing code paths use the common action check infrastructure, we can remove the NULL check for actions from the beginning of the parsing path, as this is now handled by each engine.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fsub.c | 248 +++++++++------------ drivers/net/intel/iavf/iavf_generic_flow.c | 7 - 2 files changed, 105 insertions(+), 150 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index 010c1d5a44..4131937cc7 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -540,89 +540,46 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], } static int -iavf_fsub_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action *actions, +iavf_fsub_parse_action(const struct ci_flow_actions *actions, uint32_t priority, struct rte_flow_error *error, struct iavf_fsub_conf *filter) { - const struct rte_flow_action *action; - const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_rss *act_qgrop; - struct virtchnl_filter_action *filter_action; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; - uint16_t i, num = 0, dest_num = 0, vf_num = 0; - uint16_t rule_port_id; + uint16_t num_actions = 0; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->sub_fltr.actions.actions[num_actions]; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { switch (action->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_ethdev = action->conf; - rule_port_id = ad->dev_data->port_id; - if (rule_port_id != act_ethdev->port_id) - goto error1; - - filter->sub_fltr.actions.count = ++num; - break; + /* nothing to be done, but skip the action */ + continue; case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - filter_action = 
&filter->sub_fltr.actions.actions[num]; - - act_q = action->conf; - if (act_q->index >= ad->dev_data->nb_rx_queues) - goto error2; - + { + const struct rte_flow_action_queue *act_q = action->conf; filter_action->type = VIRTCHNL_ACTION_QUEUE; filter_action->act_conf.queue.index = act_q->index; - filter->sub_fltr.actions.count = ++num; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error2; + { + const struct rte_flow_action_rss *act_qgrp = action->conf; filter_action->type = VIRTCHNL_ACTION_Q_REGION; - filter_action->act_conf.queue.index = - act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - - if (i == MAX_QGRP_NUM_TYPE) - goto error2; - - if ((act_qgrop->queue[0] + act_qgrop->queue_num) > - ad->dev_data->nb_rx_queues) - goto error3; - - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error4; - - filter_action->act_conf.queue.region = act_qgrop->queue_num; - filter->sub_fltr.actions.count = ++num; + filter_action->act_conf.queue.index = act_qgrp->queue[0]; + filter_action->act_conf.queue.region = act_qgrp->queue_num; break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type."); } + filter->sub_fltr.actions.count = ++num_actions; } /* 0 denotes lowest priority of recipe and highest priority @@ -630,91 +587,86 @@ iavf_fsub_parse_action(struct iavf_adapter *ad, */ filter->sub_fltr.priority = priority; - if (num > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - 
} - - if (vf_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action, vf action must be added"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - return 0; - -error1: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid port id"); - return -rte_errno; - -error2: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action type or queue number"); - return -rte_errno; - -error3: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid queue region indexes"); - return -rte_errno; - -error4: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Discontinuous queue region"); - return -rte_errno; } static int -iavf_fsub_check_action(const struct rte_flow_action *actions, +iavf_fsub_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, struct rte_flow_error *error) { - const struct rte_flow_action *action; - enum rte_flow_action_type action_type; - uint16_t actions_num = 0; - bool vf_valid = false; - bool queue_valid = false; + const struct iavf_adapter *ad = param->driver_ctx; + bool vf = false; + bool queue = false; + size_t i; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { + /* + * allowed action types: + * 1. PORT_REPRESENTOR only + * 2. 
PORT_REPRESENTOR + QUEUE/RSS + */ + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + switch (action->type) { case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_valid = true; - actions_num++; + { + const struct rte_flow_action_ethdev *act_ethdev = action->conf; + + if (act_ethdev->port_id != ad->dev_data->port_id) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_ethdev, + "Invalid port id"); + } + vf = true; break; + } case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrp = action->conf; + + /* must be between 2 and 128 and be a power of 2 */ + if (act_qgrp->queue_num < 2 || act_qgrp->queue_num > 128 || + !rte_is_power_of_2(act_qgrp->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid number of queues in RSS queue group"); + } + /* last queue must not exceed total number of queues */ + if (act_qgrp->queue[0] + act_qgrp->queue_num > ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid queue index in RSS queue group"); + } + + queue = true; + break; + } case RTE_FLOW_ACTION_TYPE_QUEUE: - queue_valid = true; - actions_num++; + { + const struct rte_flow_action_queue *act_q = action->conf; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue index"); + } + + queue = true; break; - case RTE_FLOW_ACTION_TYPE_DROP: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } } - - if (!((actions_num == 1 && !queue_valid) || - (actions_num == 2 && 
vf_valid && queue_valid))) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action number"); - return -rte_errno; + /* QUEUE/RSS must be accompanied by PORT_REPRESENTOR */ + if (queue != vf) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action combination"); } return 0; @@ -730,6 +682,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fsub_action_check, + .driver_ctx = ad, + }; struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; struct ci_flow_attr_check_param attr_param = { @@ -749,6 +713,10 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + goto error; + if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, @@ -772,14 +740,8 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; - /* check flow subscribe pattern action */ - ret = iavf_fsub_check_action(actions, error); - if (ret) - goto error; - /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, - error, filter); + ret = iavf_fsub_parse_action(&parsed_actions, attr->priority, error, filter); error: if (!ret && meta) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index b8f6414b16..022caf5fe2 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -2153,13 +2153,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if 
(!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, attr, pattern, actions, error); if (*engine) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
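The action combination accepted by the reworked fsub check, per the commit message, is PORT_REPRESENTOR alone or PORT_REPRESENTOR plus exactly one QUEUE/RSS action, with RSS queue groups limited to power-of-two sizes from 2 to 128 that fit in the device's Rx queue range. A standalone sketch of those two predicates (function and parameter names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Combination rule from the commit message: the representor action is
 * mandatory; at most one QUEUE or RSS action may accompany it. */
static bool fsub_combo_ok(bool has_representor, unsigned int n_queue_actions)
{
	return has_representor && n_queue_actions <= 1;
}

/* RSS queue-group size rule: a power of two between 2 and 128 whose
 * queue range fits inside the device's Rx queue count. */
static bool fsub_qgroup_ok(uint16_t first_queue, uint16_t queue_num,
			   uint16_t nb_rx_queues)
{
	if (queue_num < 2 || queue_num > 128)
		return false;
	if ((queue_num & (queue_num - 1)) != 0)
		return false;
	return (uint32_t)first_queue + queue_num <= nb_rx_queues;
}
```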
* [PATCH v1 20/25] net/iavf: use common action checks for flow query 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (18 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 19/25] net/iavf: use common action checks for fsub Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 21/25] net/ice: use common flow attribute checks Anatoly Burakov ` (6 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow parsing infrastructure to validate query actions. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_generic_flow.c | 29 ++++++++++------------ 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 022caf5fe2..9fc19576cd 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" static struct iavf_engine_list engine_list = TAILQ_HEAD_INITIALIZER(engine_list); @@ -2332,7 +2333,14 @@ iavf_flow_query(struct rte_eth_dev *dev, void *data, struct rte_flow_error *error) { - int ret = -EINVAL; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1 + }; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_flow_query_count *count = data; @@ -2344,19 +2352,8 @@ iavf_flow_query(struct rte_eth_dev *dev, return -rte_errno; } - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case 
RTE_FLOW_ACTION_TYPE_COUNT: - ret = flow->engine->query_count(ad, flow, count, error); - break; - default: - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - } - } - return ret; + if (ci_flow_check_actions(actions, &param, &parsed_actions, error) < 0) + return -rte_errno; + + return flow->engine->query_count(ad, flow, count, error); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
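All of these checks rely on the rte_flow_error_set() convention: the helper records the error details and returns the negated error code, which is what makes one-line `return rte_flow_error_set(...)` statements possible throughout the series. A simplified stand-in of that convention (the real DPDK helper fills struct rte_flow_error and sets the per-lcore rte_errno; plain errno is used here instead):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-in for struct rte_flow_error. */
struct flow_error {
	int type;
	const void *cause;
	const char *message;
};

/* Record the error details and return the negated error code, so a caller
 * can write `return flow_error_set(...)` in a single statement. */
static int flow_error_set(struct flow_error *error, int code, int type,
			  const void *cause, const char *message)
{
	if (error != NULL) {
		error->type = type;
		error->cause = cause;
		error->message = message;
	}
	errno = code; /* the real helper sets the per-lcore rte_errno */
	return -code;
}
```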
* [PATCH v1 21/25] net/ice: use common flow attribute checks 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (19 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 20/25] net/iavf: use common action checks for flow query Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 22/25] net/ice: use common action checks for hash Anatoly Burakov ` (5 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Replace custom attr checks with a call to common checks. Switch engine supports priority (0 or 1) but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 9 ++-- drivers/net/intel/ice/ice_fdir_filter.c | 8 ++- drivers/net/intel/ice/ice_generic_flow.c | 59 +++-------------------- drivers/net/intel/ice/ice_generic_flow.h | 2 +- drivers/net/intel/ice/ice_hash.c | 11 +++-- drivers/net/intel/ice/ice_switch_filter.c | 22 +++++++-- 6 files changed, 48 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 6754a40044..0421578b32 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -27,6 +27,8 @@ #include "ice_generic_flow.h" #include "base/ice_flow.h" +#include "../common/flow_check.h" + #define MAX_ACL_SLOTS_ID 2048 #define ICE_ACL_INSET_ETH_IPV4 ( \ @@ -970,7 +972,7 @@ ice_acl_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -980,8 +982,9 @@ ice_acl_parse(struct ice_adapter *ad, uint64_t input_set; 
int ret; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index 62f1257e27..553b20307c 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -14,6 +14,8 @@ #include "ice_rxtx.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_FDIR_IPV6_TC_OFFSET 20 #define ICE_IPV6_TC_MASK (0xFF << ICE_FDIR_IPV6_TC_OFFSET) @@ -2455,7 +2457,7 @@ ice_fdir_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority __rte_unused, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -2466,6 +2468,10 @@ ice_fdir_parse(struct ice_adapter *ad, bool raw = false; int ret; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c index 3f7a9f4714..2e59aef374 100644 --- a/drivers/net/intel/ice/ice_generic_flow.c +++ b/drivers/net/intel/ice/ice_generic_flow.c @@ -16,6 +16,7 @@ #include <rte_malloc.h> #include <rte_tailq.h> +#include "../common/flow_check.h" #include "ice_ethdev.h" #include "ice_generic_flow.h" @@ -1799,7 +1800,7 @@ enum rte_flow_item_type pattern_eth_ipv6_pfcp[] = { typedef bool (*parse_engine_t)(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -1885,44 +1886,6 @@ ice_flow_uninit(struct ice_adapter *ad) } } 
-static int -ice_flow_valid_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * ice_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2183,7 +2146,7 @@ static bool ice_parse_engine_create(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2201,7 +2164,7 @@ ice_parse_engine_create(struct ice_adapter *ad, if (parser->parse_pattern_action(ad, parser->array, parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) return false; RTE_ASSERT(parser->engine->create != NULL); @@ -2213,7 +2176,7 @@ static bool ice_parse_engine_validate(struct ice_adapter *ad, struct rte_flow *flow __rte_unused, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2230,7 +2193,7 @@ ice_parse_engine_validate(struct ice_adapter *ad, return 
parser->parse_pattern_action(ad, parser->array, parser->array_len, - pattern, actions, priority, + pattern, actions, attr, NULL, error) >= 0; } @@ -2258,7 +2221,6 @@ ice_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t ice_parse_engine, struct rte_flow_error *error) { - int ret = ICE_ERR_NOT_SUPPORTED; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_flow_parser *parser; @@ -2283,15 +2245,10 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - ret = ice_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = NULL; /* always try hash engine first */ if (ice_parse_engine(ad, flow, &ice_hash_parser, - attr->priority, pattern, - actions, error)) { + attr, pattern, actions, error)) { *engine = ice_hash_parser.engine; return 0; } @@ -2312,7 +2269,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (ice_parse_engine(ad, flow, parser, attr->priority, + if (ice_parse_engine(ad, flow, parser, attr, pattern, actions, error)) { *engine = parser->engine; return 0; diff --git a/drivers/net/intel/ice/ice_generic_flow.h b/drivers/net/intel/ice/ice_generic_flow.h index 54bbb47398..bae9f62f79 100644 --- a/drivers/net/intel/ice/ice_generic_flow.h +++ b/drivers/net/intel/ice/ice_generic_flow.h @@ -475,7 +475,7 @@ typedef int (*parse_pattern_action_t)(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index 1174c505da..7b77890790 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -26,6 +26,8 @@ #include "ice_ethdev.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_PHINT_NONE 0 #define ICE_PHINT_VLAN BIT_ULL(0) #define ICE_PHINT_PPPOE BIT_ULL(1) @@ -107,7 +109,7 
@@ ice_hash_parse_pattern_action(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1160,7 +1162,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1169,8 +1171,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index b25e5eaad3..d8c0e7c59c 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -26,6 +26,7 @@ #include "ice_generic_flow.h" #include "ice_dcf_ethdev.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define MAX_INPUT_SET_BYTE 32 @@ -1768,7 +1769,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1784,6 +1785,21 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, enum ice_sw_tunnel_type tun_type = ICE_NON_TUN; struct ice_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + + /* Allow only two priority values - 0 or 1 */ + if (attr->priority > 1) { 
+ rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL, + "Invalid priority for switch filter"); + return -rte_errno; + } for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { item_num++; @@ -1859,10 +1875,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, goto error; if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, priority, + ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, priority, error, + ret = ice_switch_parse_action(pf, actions, attr->priority, error, &rule_info); if (ret) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
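For readers following the series: the behaviour of the new `ci_flow_check_attr()` helper, as used in the diff above (NULL param for the strict engines, `.allow_priority = true` for the switch engine), can be sketched roughly as below. The struct shapes here are simplified stand-ins for illustration, not the actual DPDK or `flow_check.h` definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct rte_flow_attr (assumed shape). */
struct flow_attr {
	uint32_t group;
	uint32_t priority;
	uint32_t ingress:1;
	uint32_t egress:1;
	uint32_t transfer:1;
};

/* Stand-in for struct ci_flow_attr_check_param (assumed shape). */
struct attr_check_param {
	bool allow_priority; /* engine supports non-zero priority */
};

/*
 * Sketch of what a ci_flow_check_attr()-style helper enforces:
 * ingress only, no egress, no transfer, and priority 0 unless the
 * engine opts in via the param struct (a NULL param is strictest).
 * The real helper also fills in an rte_flow_error; omitted here.
 */
static int
flow_check_attr(const struct flow_attr *attr,
		const struct attr_check_param *param)
{
	if (!attr->ingress || attr->egress || attr->transfer)
		return -1;
	if (attr->priority != 0 &&
			(param == NULL || !param->allow_priority))
		return -1;
	return 0;
}
```

Note that per the diff, the switch engine still bounds priority to 0 or 1 itself after the common check passes; the common helper only gates whether non-zero priority is accepted at all.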
* [PATCH v1 22/25] net/ice: use common action checks for hash 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (20 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 21/25] net/ice: use common flow attribute checks Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 23/25] net/ice: use common action checks for FDIR Anatoly Burakov ` (4 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_hash.c | 178 +++++++++++++++++-------------- 1 file changed, 95 insertions(+), 83 deletions(-) diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index 7b77890790..250d8f97d1 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -1065,94 +1065,92 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -ice_hash_parse_action(struct ice_pattern_match_item *pattern_match_item, - const struct rte_flow_action actions[], +ice_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss; + + rss = actions->actions[0]->conf; + + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_DEFAULT: + case RTE_ETH_HASH_FUNCTION_TOEPLITZ: + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + } + + if (rss->level) + return 
rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a nonzero RSS encapsulation level is not supported"); + + if (rss->key_len) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a nonzero RSS key_len is not supported"); + + if (rss->queue) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a non-NULL RSS queue is not supported"); + + return 0; +} + +static int +ice_hash_parse_rss_action(struct ice_pattern_match_item *pattern_match_item, + const struct rte_flow_action_rss *rss, uint64_t pattern_hint, struct ice_rss_meta *rss_meta, struct rte_flow_error *error) { struct ice_rss_hash_cfg *cfg = pattern_match_item->meta; - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; + bool symm = false; - /* Supported action is RSS. */ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; - - /* Check hash function and save it to rss_meta. 
*/ - if (pattern_match_item->pattern_list != - pattern_empty && rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Not supported flow"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; - return 0; - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; - if (pattern_hint == ICE_PHINT_RAW) - rss_meta->raw.symm = true; - else - cfg->symm = true; - } - - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); - - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); - - if (rss->queue) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); - - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == ICE_PHINT_RAW) - break; - - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. 
- */ - rss_type = rte_eth_rss_hf_refine(rss_type); - - if (ice_any_invalid_rss_type(rss->func, rss_type, - pattern_match_item->input_set_mask_o)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); - - rss_meta->cfg = *cfg; - ice_refine_hash_cfg(&rss_meta->cfg, - rss_type, pattern_hint); - break; - case RTE_FLOW_ACTION_TYPE_END: - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; + if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { + if (pattern_match_item->pattern_list != pattern_empty) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "XOR hash function is only supported for empty pattern"); } + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; + return 0; } + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; + symm = true; + } + + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == ICE_PHINT_RAW) { + rss_meta->raw.symm = symm; + return 0; + } + cfg->symm = symm; + + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); + + if (ice_any_invalid_rss_type(rss->func, rss_type, + pattern_match_item->input_set_mask_o)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss, "RSS type not supported"); + + rss_meta->cfg = *cfg; + ice_refine_hash_cfg(&rss_meta->cfg, + rss_type, pattern_hint); + return 0; } @@ -1166,15 +1164,29 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { - int ret = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct ice_pattern_match_item *pattern_match_item; struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; + int ret = 0; ret = ci_flow_check_attr(attr, NULL, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1206,9 +1218,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, } } - /* Check rss action. */ - ret = ice_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret = ice_hash_parse_rss_action(pattern_match_item, rss, phint, + rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
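The split introduced by this patch is: the generic `ci_flow_check_actions()` layer enforces the allowed action types and count (here, exactly one RSS action), and the engine supplies a `.check` callback such as `ice_hash_parse_action_check()` that only validates the action's configuration. That callback's logic can be sketched as follows, using simplified stand-in types (assumed shapes, not the actual rte_flow definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed, simplified mirror of the rte_flow_action_rss fields used here. */
enum hash_func {
	HF_DEFAULT,
	HF_TOEPLITZ,
	HF_SIMPLE_XOR,
	HF_SYMMETRIC_TOEPLITZ,
	HF_OTHER,
};

struct rss_conf {
	enum hash_func func;
	uint32_t level;
	uint32_t key_len;
	uint32_t queue_num;
	const uint16_t *queue;
};

/*
 * Sketch of the per-engine check callback: by the time it runs, the
 * common layer has already guaranteed a single RSS action, so only
 * the configuration needs vetting. Error-struct filling is omitted.
 */
static int
hash_rss_check(const struct rss_conf *rss)
{
	switch (rss->func) {
	case HF_DEFAULT:
	case HF_TOEPLITZ:
	case HF_SIMPLE_XOR:
	case HF_SYMMETRIC_TOEPLITZ:
		break;
	default:
		return -1; /* unsupported hash function */
	}
	if (rss->level != 0)    /* encapsulation levels not supported */
		return -1;
	if (rss->key_len != 0)  /* explicit RSS keys not supported */
		return -1;
	if (rss->queue != NULL) /* queue lists are not valid for hash */
		return -1;
	return 0;
}
```

This also explains why the pattern-dependent XOR restriction stays out of the callback and in `ice_hash_parse_rss_action()`: the callback sees only the action, not the matched pattern.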
* [PATCH v1 23/25] net/ice: use common action checks for FDIR 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (21 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 22/25] net/ice: use common action checks for hash Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 24/25] net/ice: use common action checks for switch Anatoly Burakov ` (3 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_fdir_filter.c | 376 +++++++++++++----------- 1 file changed, 204 insertions(+), 172 deletions(-) diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index 553b20307c..a204aa785b 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -1604,177 +1604,6 @@ static struct ice_flow_engine ice_fdir_engine = { .type = ICE_FLOW_ENGINE_FDIR, }; -static int -ice_fdir_parse_action_qregion(struct ice_pf *pf, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct ice_fdir_filter_conf *filter) -{ - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != 
rss->queue[i] + 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - (rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE))) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - filter->input.q_index = rss->queue[0]; - filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; - filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; - - return 0; -} - -static int -ice_fdir_parse_action(struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_fdir_filter_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - const struct rte_flow_action_count *act_count; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - uint32_t counter_num = 0; - int ret; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - 
ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - break; - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - ret = ice_fdir_parse_action_qregion(pf, - error, actions, filter); - if (ret) - return ret; - break; - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - filter->mark_flag = 1; - mark_spec = actions->conf; - filter->input.fltr_id = mark_spec->id; - filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; - break; - case RTE_FLOW_ACTION_TYPE_COUNT: - counter_num++; - - act_count = actions->conf; - filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; - rte_memcpy(&filter->act_count, act_count, - sizeof(filter->act_count)); - - break; - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (counter_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many count actions"); - return -rte_errno; - } - - if (dest_num + mark_num + counter_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* set default action to PASSTHRU mode, in "mark/count only" case. 
*/ - if (dest_num == 0) - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - - return 0; -} static int ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, @@ -2451,6 +2280,189 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_fdir_parse_action(struct ice_adapter *ad, + const struct ci_flow_actions *actions, + struct rte_flow_error *error) +{ + struct ice_pf *pf = &ad->pf; + struct ice_fdir_filter_conf *filter = &pf->fdir.conf; + bool dest_set = false; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_set = true; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_set = true; + + filter->input.q_index = rss->queue[0]; + filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; + filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; + + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec = act->conf; + filter->mark_flag = 1; + filter->input.fltr_id = mark_spec->id; + filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + const struct rte_flow_action_count *act_count = act->conf; + + filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; + rte_memcpy(&filter->act_count, act_count, + sizeof(filter->act_count)); + break; + } + default: + /* Should not happen */ + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + /* set default action to PASSTHRU mode, in "mark/count only" case. */ + if (!dest_set) { + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + } + + return 0; +} + +static int +ice_fdir_check_action_qregion(struct ice_pf *pf, + struct rte_flow_error *error, + const struct rte_flow_action_rss *rss) +{ + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + (rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE))) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + } + + return 0; +} + +static int +ice_fdir_check_action(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + uint32_t dest_num = 0; + uint32_t mark_num = 0; + uint32_t counter_num = 0; + size_t i; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_num++; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid queue for FDIR."); + } + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: 
+ dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + + dest_num++; + ret = ice_fdir_check_action_qregion(pf, error, rss); + if (ret) + return ret; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + mark_num++; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + counter_num++; + break; + } + default: + /* Should not happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + if (dest_num > 1 || mark_num > 1 || counter_num > 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Unsupported action combination"); + } + + return 0; +} + static int ice_fdir_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -2461,6 +2473,21 @@ ice_fdir_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 3, + .check = ice_fdir_check_action, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_fdir_filter_conf *filter = &pf->fdir.conf; struct ice_pattern_match_item *item = NULL; @@ -2473,6 +2500,11 @@ ice_fdir_parse(struct ice_adapter *ad, return ret; memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); @@ -2500,7 +2532,7 @@ ice_fdir_parse(struct ice_adapter *ad, goto error; } - ret = ice_fdir_parse_action(ad, actions, error, filter); + ret = ice_fdir_parse_action(ad, &parsed_actions, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related 
[flat|nested] 83+ messages in thread
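The FDIR queue-region rules scattered through the old `ice_fdir_parse_action_qregion()` (size a power of two within the allowed maximum, indexes contiguous, region within the device's Rx queue count) condense into a small predicate. A sketch, with a plain power-of-two test standing in for `rte_is_power_of_2()` and the names here being illustrative rather than taken from the driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Validation in the style of the FDIR queue-region checks:
 *  - region size must be a power of two, > 1, and <= max_qregion
 *  - queue indexes must be contiguous
 *  - the region must fit within nb_rx_queues
 */
static bool
fdir_qregion_valid(const uint16_t *queue, uint32_t queue_num,
		   uint16_t nb_rx_queues, uint32_t max_qregion)
{
	uint32_t i;

	if (queue_num <= 1)
		return false;
	/* power-of-two test: exactly one bit set */
	if ((queue_num & (queue_num - 1)) != 0 || queue_num > max_qregion)
		return false;
	for (i = 0; i + 1 < queue_num; i++)
		if (queue[i + 1] != queue[i] + 1)
			return false; /* discontinuous queue region */
	return queue[queue_num - 1] < nb_rx_queues;
}
```

Worth noting while reviewing: the refactored `ice_fdir_check_action_qregion()` in this patch keeps the size and range checks but no longer loops over the indexes for contiguity, so whether that check was intentionally dropped may deserve a closer look.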
* [PATCH v1 24/25] net/ice: use common action checks for switch 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (22 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 23/25] net/ice: use common action checks for FDIR Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-02-11 14:20 ` [PATCH v1 25/25] net/ice: use common action checks for ACL Anatoly Burakov ` (2 subsequent siblings) 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for switch filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_switch_filter.c | 370 +++++++++++----------- 1 file changed, 184 insertions(+), 186 deletions(-) diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index d8c0e7c59c..9a46e3b413 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -35,6 +35,8 @@ #define ICE_IPV4_PROTO_NVGRE 0x002F #define ICE_SW_PRI_BASE 6 +#define ICE_SW_MAX_QUEUES 128 + #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) #define ICE_SW_INSET_MAC_VLAN ( \ @@ -1527,85 +1529,38 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], } static int -ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, - const struct rte_flow_action *actions, +ice_switch_parse_dcf_action(const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action *action; const struct rte_eth_dev *repr_dev; enum rte_flow_action_type action_type; - uint16_t rule_port_id, backer_port_id; - for (action = actions; 
action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid_port_id; - - /* For traffic to original DCF port */ - rule_port_id = ad->parent.pf.dev_data->port_id; - - if (rule_port_id != act_ethdev->port_id) - goto invalid_port_id; - - rule_info->sw_act.vsi_handle = 0; - - break; - -invalid_port_id: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid; - - repr_dev = &rte_eth_devices[act_ethdev->port_id]; - - if (!repr_dev->data) - goto invalid; - - rule_port_id = ad->parent.pf.dev_data->port_id; - backer_port_id = repr_dev->data->backer_port_id; - - if (backer_port_id != rule_port_id) - goto invalid; - - rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; - break; - -invalid: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid ethdev_port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = ICE_DROP_PACKET; - break; - - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + rule_info->sw_act.vsi_handle = 0; + break; + + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + act_ethdev = action->conf; + repr_dev = &rte_eth_devices[act_ethdev->port_id]; + rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; 
+ break; + + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = ICE_DROP_PACKET; + break; + + default: + /* Should never reach */ + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type"); + return -rte_errno; } rule_info->sw_act.src = rule_info->sw_act.vsi_handle; @@ -1621,73 +1576,38 @@ ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, static int ice_switch_parse_action(struct ice_pf *pf, - const struct rte_flow_action *actions, + const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { struct ice_vsi *vsi = pf->main_vsi; - struct rte_eth_dev_data *dev_data = pf->adapter->pf.dev_data; const struct rte_flow_action_queue *act_q; const struct rte_flow_action_rss *act_qgrop; - uint16_t base_queue, i; - const struct rte_flow_action *action; + uint16_t base_queue; enum rte_flow_action_type action_type; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; base_queue = pf->base_queue + vsi->base_queue; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error; - rule_info->sw_act.fltr_act = - ICE_FWD_TO_QGRP; - rule_info->sw_act.fwd_id.q_id = - base_queue + act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - if (i == MAX_QGRP_NUM_TYPE) - goto error; - if ((act_qgrop->queue[0] + - act_qgrop->queue_num) > - dev_data->nb_rx_queues) - goto error1; - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error2; - rule_info->sw_act.qgrp_size = - act_qgrop->queue_num; - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = action->conf; - if (act_q->index >= dev_data->nb_rx_queues) - goto error; - 
rule_info->sw_act.fltr_act = - ICE_FWD_TO_Q; - rule_info->sw_act.fwd_id.q_id = - base_queue + act_q->index; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = - ICE_DROP_PACKET; - break; - - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - default: - goto error; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_RSS: + act_qgrop = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_QGRP; + rule_info->sw_act.fwd_id.q_id = base_queue + act_qgrop->queue[0]; + rule_info->sw_act.qgrp_size = act_qgrop->queue_num; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + act_q = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_Q; + rule_info->sw_act.fwd_id.q_id = base_queue + act_q->index; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = ICE_DROP_PACKET; + break; + default: + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type or queue number"); + return -rte_errno; } rule_info->sw_act.vsi_handle = vsi->idx; @@ -1699,65 +1619,120 @@ ice_switch_parse_action(struct ice_pf *pf, rule_info->priority = ICE_SW_PRI_BASE - priority; return 0; - -error: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type or queue number"); - return -rte_errno; - -error1: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue region indexes"); - return -rte_errno; - -error2: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Discontinuous queue region"); - return -rte_errno; } static int -ice_switch_check_action(const struct rte_flow_action *actions, - struct rte_flow_error *error) +ice_switch_dcf_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) { + struct ice_dcf_adapter *ad = param->driver_ctx; const struct rte_flow_action *action; enum rte_flow_action_type action_type; - 
uint16_t actions_num = 0; - - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - case RTE_FLOW_ACTION_TYPE_QUEUE: - case RTE_FLOW_ACTION_TYPE_DROP: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; + const struct rte_flow_action_ethdev *act_ethdev; + const struct rte_eth_dev *repr_dev; + + action = actions->actions[0]; + action_type = action->type; + + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + { + uint16_t expected_port_id, backer_port_id; + act_ethdev = action->conf; + + if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) + goto invalid_port_id; + + expected_port_id = ad->parent.pf.dev_data->port_id; + + if (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) { + if (expected_port_id != act_ethdev->port_id) + goto invalid_port_id; + } else { + repr_dev = &rte_eth_devices[act_ethdev->port_id]; + + if (!repr_dev->data) + goto invalid_port_id; + + backer_port_id = repr_dev->data->backer_port_id; + + if (backer_port_id != expected_port_id) + goto invalid_port_id; } + + break; +invalid_port_id: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid port ID"); + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } - if (actions_num != 1) { - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action number"); - return -rte_errno; + return 0; +} + +static int +ice_switch_action_check(const struct ci_flow_actions *actions, + 
const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + struct rte_eth_dev_data *dev_data = pf->dev_data; + const struct rte_flow_action *action = actions->actions[0]; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrop; + act_qgrop = action->conf; + + /* Check bounds on number of queues */ + if (act_qgrop->queue_num < 2 || act_qgrop->queue_num > ICE_SW_MAX_QUEUES) + goto err_rss; + + /* must be power of 2 */ + if (!rte_is_power_of_2(act_qgrop->queue_num)) + goto err_rss; + + /* queues are monotonically increasing and contiguous, so checking the last queue is enough */ + if ((act_qgrop->queue[0] + act_qgrop->queue_num) > dev_data->nb_rx_queues) + goto err_rss; + + break; +err_rss: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue region"); + } + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + act_q = action->conf; + if (act_q->index >= dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue"); + } + + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } return 0; @@ -1788,11 +1763,38 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, struct ci_flow_attr_check_param attr_param = { .allow_priority = true, }; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param dcf_param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_switch_dcf_action_check, + }; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum
rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_switch_action_check, + .driver_ctx = ad, + }; ret = ci_flow_check_attr(attr, &attr_param, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, (ad->hw.dcf_enabled) ? &dcf_param : ¶m, + &parsed_actions, error); + if (ret) + goto error; + /* Allow only two priority values - 0 or 1 */ if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, @@ -1870,16 +1872,12 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, memset(&rule_info, 0, sizeof(rule_info)); rule_info.tun_type = tun_type; - ret = ice_switch_check_action(actions, error); - if (ret) - goto error; - if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, - error, &rule_info); + ret = ice_switch_parse_dcf_action(parsed_actions.actions[0], + attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, attr->priority, error, - &rule_info); + ret = ice_switch_parse_action(pf, parsed_actions.actions[0], + attr->priority, error, &rule_info); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v1 25/25] net/ice: use common action checks for ACL 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (23 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 24/25] net/ice: use common action checks for switch Anatoly Burakov @ 2026-02-11 14:20 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov 26 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-02-11 14:20 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for ACL filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 143 +++++++++++++++---------- 1 file changed, 84 insertions(+), 59 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 0421578b32..90fd3c2c05 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -645,60 +645,6 @@ ice_acl_filter_free(struct rte_flow *flow) flow->rule = NULL; } -static int -ice_acl_parse_action(__rte_unused struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_acl_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - uint32_t dest_num = 0; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - 
pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num == 0 || dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - return 0; -} - static int ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, const struct rte_flow_item pattern[], @@ -966,6 +912,69 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_acl_parse_action(const struct ci_flow_actions *actions, + struct ice_acl_conf *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + + return 0; +} + +static int +ice_acl_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid queue for ACL."); + } + break; + } + default: + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + + return 0; +} + static int ice_acl_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -976,17 +985,33 @@ ice_acl_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_acl_parse_action_check, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_acl_conf *filter = &pf->acl.conf; struct ice_pattern_match_item *item = NULL; uint64_t input_set; int ret; - ret = ci_flow_check_attr(attr, NULL, error); - if (ret) - return ret; - memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); if (!item) @@ -1005,7 +1030,7 @@ ice_acl_parse(struct ice_adapter *ad, goto error; } - ret = ice_acl_parse_action(ad, actions, error, filter); + ret = ice_acl_parse_action(&parsed_actions, filter, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's 2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (24 preceding siblings ...) 2026-02-11 14:20 ` [PATCH v1 25/25] net/ice: use common action checks for ACL Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 01/25] net/intel/common: add common flow action parsing Anatoly Burakov ` (25 more replies) 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov 26 siblings, 26 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev This patchset introduces common flow attr/action checking infrastructure to some Intel PMD's (IXGBE, I40E, IAVF, and ICE). The aim is to reduce code duplication, simplify implementation of new parsers/verification of existing ones, and make action/attr handling more consistent across drivers. v2: - Rebase on latest main - Now depends on series 37585 [1] [1] https://patches.dpdk.org/project/dpdk/list/?series=37585 Anatoly Burakov (20): net/intel/common: add common flow action parsing net/intel/common: add common flow attr validation net/ixgbe: use common checks in ethertype filter net/ixgbe: use common checks in syn filter net/ixgbe: use common checks in L2 tunnel filter net/ixgbe: use common checks in ntuple filter net/ixgbe: use common checks in security filter net/ixgbe: use common checks in FDIR filters net/ixgbe: use common checks in RSS filter net/i40e: use common flow attribute checks net/i40e: refactor RSS flow parameter checks net/i40e: use common action checks for ethertype net/i40e: use common action checks for FDIR net/i40e: use common action checks for tunnel net/iavf: use common flow attribute checks net/iavf: use common action checks for IPsec net/iavf: use common action checks for hash net/iavf: use common action checks for FDIR net/iavf: use common action checks for fsub net/iavf: use common action checks 
for flow query Vladimir Medvedkin (5): net/ice: use common flow attribute checks net/ice: use common action checks for hash net/ice: use common action checks for FDIR net/ice: use common action checks for switch net/ice: use common action checks for ACL drivers/net/intel/common/flow_check.h | 307 ++++++ drivers/net/intel/i40e/i40e_ethdev.h | 1 - drivers/net/intel/i40e/i40e_flow.c | 434 +++----- drivers/net/intel/i40e/i40e_hash.c | 425 +++++--- drivers/net/intel/i40e/i40e_hash.h | 2 +- drivers/net/intel/iavf/iavf_fdir.c | 367 +++---- drivers/net/intel/iavf/iavf_fsub.c | 266 +++-- drivers/net/intel/iavf/iavf_generic_flow.c | 103 +- drivers/net/intel/iavf/iavf_generic_flow.h | 2 +- drivers/net/intel/iavf/iavf_hash.c | 153 +-- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 42 +- drivers/net/intel/ice/ice_acl_filter.c | 146 ++- drivers/net/intel/ice/ice_fdir_filter.c | 383 ++++--- drivers/net/intel/ice/ice_generic_flow.c | 59 +- drivers/net/intel/ice/ice_generic_flow.h | 2 +- drivers/net/intel/ice/ice_hash.c | 189 ++-- drivers/net/intel/ice/ice_switch_filter.c | 388 +++---- drivers/net/intel/ixgbe/ixgbe_flow.c | 1135 +++++++------------- 18 files changed, 2151 insertions(+), 2253 deletions(-) create mode 100644 drivers/net/intel/common/flow_check.h -- 2.47.3 ^ permalink raw reply [flat|nested] 83+ messages in thread
* [PATCH v2 01/25] net/intel/common: add common flow action parsing 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 02/25] net/intel/common: add common flow attr validation Anatoly Burakov ` (24 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson Currently, each driver has their own code for action parsing, which results in a lot of duplication and subtle mismatches in behavior between drivers. Add common infrastructure, based on the following assumptions: - All drivers support at most 4 actions at once, but usually less - Not every action is supported by all drivers - We can check a few common things to filter out obviously wrong actions - Driver performs semantic checks on all valid actions So, the intention is to reject everything we can reasonably reject at the outset without knowing anything about the drivers, parametrize what is trivial to parametrize, and leave the rest for the driver to implement. 
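The parse-then-delegate flow described above can be followed in isolation with a standalone sketch. The types and names below (`act_type`, `check_actions`, and so on) are simplified stand-ins invented for illustration only, not the DPDK structures introduced by this patch:

```c
/* Simplified, self-contained re-creation of the ci_flow_check_actions()
 * control flow: skip VOID entries, reject action types outside the allowed
 * list, cap the number of actions, and collect pointers for the driver. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum act_type { ACT_END, ACT_VOID, ACT_QUEUE, ACT_DROP, ACT_RSS };

struct action { enum act_type type; };

#define MAX_PARSED 4

static bool type_in_list(enum act_type t, const enum act_type *list)
{
	for (size_t i = 0; list[i] != ACT_END; i++)
		if (list[i] == t)
			return true;
	return false;
}

/* Returns number of parsed actions, or -1 on rejection. */
static int check_actions(const struct action *acts,
			 const enum act_type *allowed,
			 size_t max_actions,
			 const struct action *parsed[MAX_PARSED])
{
	int count = 0;

	for (size_t i = 0; acts[i].type != ACT_END; i++) {
		if (acts[i].type == ACT_VOID)
			continue;		/* VOID actions are skipped */
		if (!type_in_list(acts[i].type, allowed))
			return -1;		/* "Unsupported action" */
		if ((size_t)count >= max_actions || count >= MAX_PARSED)
			return -1;		/* "Too many actions" */
		parsed[count++] = &acts[i];
	}
	/* an empty action list is invalid */
	return count == 0 ? -1 : count;
}
```

The driver-specific semantic checks (queue bounds, port IDs, etc.) would then run only over the collected `parsed` array, which is the division of labor the commit message describes.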
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- Depends-on: series-37585 ("Reduce reliance on global response buffer in IAVF") drivers/net/intel/common/flow_check.h | 237 ++++++++++++++++++++++++++ 1 file changed, 237 insertions(+) create mode 100644 drivers/net/intel/common/flow_check.h diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h new file mode 100644 index 0000000000..69b1b60ffc --- /dev/null +++ b/drivers/net/intel/common/flow_check.h @@ -0,0 +1,237 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2025 Intel Corporation + */ + +#ifndef _COMMON_INTEL_FLOW_CHECK_H_ +#define _COMMON_INTEL_FLOW_CHECK_H_ + +#include <bus_pci_driver.h> +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Common attr and action validation code for Intel drivers. + */ + +/** + * Maximum number of actions that can be stored in a parsed action list. + */ +#define CI_FLOW_PARSED_ACTIONS_MAX 4 + +/* Actions that are reasonably expected to have a conf structure */ +static const enum rte_flow_action_type need_conf[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END +}; + +/** + * Is action type in this list of action types? + */ +__rte_internal +static inline bool +ci_flow_action_type_in_list(const enum rte_flow_action_type type, + const enum rte_flow_action_type list[]) +{ + size_t i = 0; + while (list[i] != RTE_FLOW_ACTION_TYPE_END) { + if (type == list[i]) + return true; + i++; + } + return false; +} + +/* Forward declarations */ +struct ci_flow_actions; +struct ci_flow_actions_check_param; + +/** + * Driver-specific action list validation callback. + * + * Performs driver-specific validation of action parameter list. 
+ * Called after all actions have been parsed and added to the list, + * allowing validation based on the complete action set. + * + * @param actions + * The complete list of parsed actions (for context-dependent validation). + * @param param + * Check parameters, including the opaque driver context (e.g., adapter/queue configuration). + * @param error + * Pointer to rte_flow_error for reporting failures. + * @return + * 0 on success, negative errno on failure. + */ +typedef int (*ci_flow_actions_check_fn)(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error); + +/** + * List of actions that we know we've validated. + */ +struct ci_flow_actions { + /* Number of actions in the list. */ + uint8_t count; + /* Parsed actions array. */ + struct rte_flow_action const *actions[CI_FLOW_PARSED_ACTIONS_MAX]; +}; + +/** + * Parameters for action list validation. Any element can be NULL/0 as checks are only performed + * against the constraints specified. + */ +struct ci_flow_actions_check_param { + /** + * Driver-specific context pointer (e.g., adapter/queue configuration). Can be NULL. + */ + void *driver_ctx; + /** + * Driver-specific action list validation callback. Can be NULL. + */ + ci_flow_actions_check_fn check; + /** + * Allowed action types for this parse parameter. Must be terminated with + * RTE_FLOW_ACTION_TYPE_END. Can be NULL. + */ + const enum rte_flow_action_type *allowed_types; + size_t max_actions; /**< Maximum number of actions allowed.
*/ +}; + +static inline int +__flow_action_check_rss(const struct rte_flow_action_rss *rss, struct rte_flow_error *error) +{ + size_t q, q_num = rss->queue_num; + /* either we have both a queue count and a queue array, or we have neither */ + if ((q_num == 0) != (rss->queue == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "If queue number is specified, queue array must also be specified"); + } + for (q = 1; q < q_num; q++) { + uint16_t qi = rss->queue[q]; + if (rss->queue[q - 1] + 1 != qi) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queues must be contiguous"); + } + } + /* either we have both key and key length, or we have neither */ + if ((rss->key_len == 0) != (rss->key == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "If RSS key is specified, key length must also be specified"); + } + return 0; +} + +static inline int +__flow_action_check_generic(const struct rte_flow_action *action, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + /* is this action in our allowed list? */ + if (param != NULL && param->allowed_types != NULL && + !ci_flow_action_type_in_list(action->type, param->allowed_types)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Unsupported action"); + } + /* do we need to validate presence of conf?
*/ + if (ci_flow_action_type_in_list(action->type, need_conf)) { + if (action->conf == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, action, + "Action requires configuration"); + } + } + + /* type-specific validation */ + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = + (const struct rte_flow_action_rss *)action->conf; + if (__flow_action_check_rss(rss, error) < 0) + return -rte_errno; + break; + } + default: + /* no specific validation */ + break; + } + + return 0; +} + +/** + * Validate and parse a list of rte_flow_action into a parsed action list. + * + * @param actions pointer to array of rte_flow_action, terminated by RTE_FLOW_ACTION_TYPE_END + * @param param pointer to ci_flow_actions_check_param structure (can be NULL) + * @param parsed_actions pointer to ci_flow_actions structure to store parsed actions + * @param error pointer to rte_flow_error structure for error reporting + * + * @return 0 on success, negative errno on failure. 
+ */ +__rte_internal +static inline int +ci_flow_check_actions(const struct rte_flow_action *actions, + const struct ci_flow_actions_check_param *param, + struct ci_flow_actions *parsed_actions, + struct rte_flow_error *error) +{ + size_t i = 0; + + if (actions == NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Missing actions"); + } + + /* reset the list */ + *parsed_actions = (struct ci_flow_actions){0}; + + while (actions[i].type != RTE_FLOW_ACTION_TYPE_END) { + const struct rte_flow_action *action = &actions[i++]; + + /* skip VOID actions */ + if (action->type == RTE_FLOW_ACTION_TYPE_VOID) + continue; + + /* generic validation for actions - this will check against param as well */ + if (__flow_action_check_generic(action, param, error) < 0) + return -rte_errno; + + /* add action to the list */ + if (parsed_actions->count >= RTE_DIM(parsed_actions->actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + /* user may have specified a maximum number of actions */ + if (param != NULL && param->max_actions != 0 && + parsed_actions->count >= param->max_actions) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + parsed_actions->actions[parsed_actions->count++] = action; + } + + /* now, call into user validation if specified */ + if (param != NULL && param->check != NULL) { + if (param->check(parsed_actions, param, error) < 0) + return -rte_errno; + } + /* if we didn't parse anything, the action list was empty, which is invalid */ + return parsed_actions->count == 0 ? -EINVAL : 0; +} + +#ifdef __cplusplus +} +#endif + +#endif /* _COMMON_INTEL_FLOW_CHECK_H_ */ -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 02/25] net/intel/common: add common flow attr validation 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 01/25] net/intel/common: add common flow action parsing Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov ` (23 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson There are a lot of commonalities between what kinds of flow attr each Intel driver supports. Add a helper function that will validate attr based on common requirements and (optional) parameter checks. Things we check for: - Rejecting NULL attr (obviously) - Default to ingress flows - Transfer, group, priority, and egress are not allowed unless requested Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/common/flow_check.h | 70 +++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h index 69b1b60ffc..9df7c39a7e 100644 --- a/drivers/net/intel/common/flow_check.h +++ b/drivers/net/intel/common/flow_check.h @@ -53,6 +53,7 @@ ci_flow_action_type_in_list(const enum rte_flow_action_type type, /* Forward declarations */ struct ci_flow_actions; struct ci_flow_actions_check_param; +struct ci_flow_attr_check_param; /** * Driver-specific action list validation callback. @@ -230,6 +231,75 @@ ci_flow_check_actions(const struct rte_flow_action *actions, return parsed_actions->count == 0 ? -EINVAL : 0; } +/** + * Parameter structure for attr check. + */ +struct ci_flow_attr_check_param { + bool allow_priority; /**< True if priority attribute is allowed. */ + bool allow_transfer; /**< True if transfer attribute is allowed. 
*/ + bool allow_group; /**< True if group attribute is allowed. */ + bool expect_egress; /**< True if egress attribute is expected. */ +}; + +/** + * Validate rte_flow_attr structure against specified constraints. + * + * @param attr Pointer to rte_flow_attr structure to validate. + * @param attr_param Pointer to ci_flow_attr_check_param structure specifying constraints. + * @param error Pointer to rte_flow_error structure for error reporting. + * + * @return 0 on success, negative errno on failure. + */ +__rte_internal +static inline int +ci_flow_check_attr(const struct rte_flow_attr *attr, + const struct ci_flow_attr_check_param *attr_param, + struct rte_flow_error *error) +{ + if (attr == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "NULL attribute"); + } + + /* Direction must be either ingress or egress */ + if (attr->ingress == attr->egress) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "Either ingress or egress must be set"); + } + + /* Expect ingress by default */ + if (attr->egress && (attr_param == NULL || !attr_param->expect_egress)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, attr, + "Egress not supported"); + } + + /* May not be supported */ + if (attr->transfer && (attr_param == NULL || !attr_param->allow_transfer)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, attr, + "Transfer not supported"); + } + + /* May not be supported */ + if (attr->group && (attr_param == NULL || !attr_param->allow_group)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, attr, + "Group not supported"); + } + + /* May not be supported */ + if (attr->priority && (attr_param == NULL || !attr_param->allow_priority)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority not supported"); + } + + return 0; +} + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink 
raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 03/25] net/ixgbe: use common checks in ethertype filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 01/25] net/intel/common: add common flow action parsing Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 02/25] net/intel/common: add common flow attr validation Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov ` (22 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ethertype. This allows us to remove some checks as they are no longer necessary (such as whether DROP flag was set - if we do not accept DROP actions, we do not set the DROP flag), as well as make some checks more stringent (such as rejecting more than one action). Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 190 ++++++++++----------------- 1 file changed, 73 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 01cd4f9bde..e41e070ee7 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -43,6 +43,8 @@ #include "base/ixgbe_phy.h" #include "rte_pmd_ixgbe.h" +#include "../common/flow_check.h" + #define IXGBE_MIN_N_TUPLE_PRIO 1 #define IXGBE_MAX_N_TUPLE_PRIO 7 @@ -133,6 +135,41 @@ const struct rte_flow_action *next_no_void_action( } } +/* + * All ixgbe engines mostly check the same stuff, so use a common check. 
+ */ +static int +ixgbe_flow_actions_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action; + struct rte_eth_dev *dev = (struct rte_eth_dev *)param->driver_ctx; + size_t idx; + + for (idx = 0; idx < actions->count; idx++) { + action = actions->actions[idx]; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *queue = action->conf; + if (queue->index >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "queue index out of range"); + } + break; + } + default: + /* no specific validation */ + break; + } + } + return 0; +} + /** * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and @@ -743,38 +780,14 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, * item->last should be NULL. */ static int -cons_parse_ethertype_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item *pattern, - const struct rte_flow_action *actions, - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +cons_parse_ethertype_filter(const struct rte_flow_item *pattern, + const struct rte_flow_action *action, + struct rte_eth_ethertype_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_eth *eth_spec; const struct rte_flow_item_eth *eth_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return 
-rte_errno; - } item = next_no_void_pattern(pattern, NULL); /* The first non-void item should be MAC. */ @@ -844,87 +857,30 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* Parse action */ - - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - } else { - filter->flags |= RTE_ETHTYPE_FLAGS_DROP; - } - - /* Check if the next non-void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* Parse attr */ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } + filter->queue = ((const struct rte_flow_action_queue *)action->conf)->index; return 0; } static int 
-ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_ethertype_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -934,19 +890,27 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_ethertype_filter(attr, pattern, - actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - if (filter->queue >= dev->data->nb_rx_queues) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "queue index much too big"); - return -rte_errno; + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); } + action = parsed_actions.actions[0]; + + ret = cons_parse_ethertype_filter(pattern, action, 
filter, error); + if (ret) + return ret; if (filter->ether_type == RTE_ETHER_TYPE_IPV4 || filter->ether_type == RTE_ETHER_TYPE_IPV6) { @@ -965,14 +929,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "drop option is unsupported"); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
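For readers following along without the first two patches in the series: the core idea above is replacing hand-rolled action walks with a declarative table of allowed action types plus a count limit. A minimal standalone sketch of that filtering idea (the enum values and function names here are simplified stand-ins, not the actual `ci_flow_*` or `rte_flow` API):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow action types (hypothetical values). */
enum flow_action_type { ACT_END = 0, ACT_QUEUE, ACT_DROP, ACT_MARK };

/*
 * Return 0 if every action before the ACT_END terminator appears in the
 * allowed list (also ACT_END-terminated, mirroring the allowed_types
 * arrays in the patch) and the total count stays within max_actions;
 * return -1 otherwise.
 */
static int check_actions(const enum flow_action_type *actions,
			 const enum flow_action_type *allowed,
			 size_t max_actions)
{
	size_t count = 0;

	for (; *actions != ACT_END; actions++) {
		int ok = 0;

		for (const enum flow_action_type *a = allowed; *a != ACT_END; a++)
			if (*a == *actions)
				ok = 1;
		if (!ok || ++count > max_actions)
			return -1;
	}
	return 0;
}
```

With `allowed = { ACT_QUEUE, ACT_END }` and `max_actions = 1`, this rejects both a DROP action and a second QUEUE action — the two behaviors the ethertype patch calls out (no DROP flag handling needed, at most one action).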
* [PATCH v2 04/25] net/ixgbe: use common checks in syn filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (2 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov ` (21 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in syn filter. Some checks have been rearranged or become more stringent due to using common infrastructure. In particular, group attr was ignored previously but is now explicitly rejected. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 130 +++++++-------------------- 1 file changed, 32 insertions(+), 98 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index e41e070ee7..220b89f8fc 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -953,38 +953,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr * item->last should be NULL. 
*/ static int -cons_parse_syn_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +cons_parse_syn_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], + const struct rte_flow_action_queue *q_act, struct rte_eth_syn_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_tcp *tcp_spec; const struct rte_flow_item_tcp *tcp_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } /* the first not void item should be MAC or IPv4 or IPv6 or TCP */ @@ -1092,63 +1067,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* check if the first not void action is QUEUE. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } + filter->queue = q_act->index; /* Support 2 priorities, the lowest or highest. 
*/ if (!attr->priority) { @@ -1159,7 +1078,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, memset(filter, 0, sizeof(struct rte_eth_syn_filter)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); + attr, "Priority can be 0 or 0xFFFFFFFF"); return -rte_errno; } @@ -1167,15 +1086,27 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, } static int -ixgbe_parse_syn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_syn_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -1185,16 +1116,19 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_syn_filter(attr, pattern, - actions, filter, error); - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - return 0; 
+ action = parsed_actions.actions[0]; + + return cons_parse_syn_filter(attr, pattern, action->conf, filter, error); } /** -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
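The priority handling the syn-filter patch keeps (while delegating ingress/egress/transfer/group checks to `ci_flow_check_attr` via `allow_priority`) accepts only the two extremes. A hypothetical sketch of that mapping, assuming the same 0-or-0xFFFFFFFF convention the new error string states:

```c
#include <assert.h>
#include <stdint.h>

/*
 * The syn filter supports exactly two priorities: the lowest (0) or the
 * highest (0xFFFFFFFF). Map the attr priority to the single hi/lo flag
 * the hardware understands; reject anything in between. Simplified
 * stand-in, not the actual ixgbe code.
 */
static int syn_priority_to_hw(uint32_t priority, uint8_t *hig_pri)
{
	if (priority == 0) {
		*hig_pri = 0;
		return 0;
	}
	if (priority == UINT32_MAX) {
		*hig_pri = 1;
		return 0;
	}
	return -1; /* driver reports "Priority can be 0 or 0xFFFFFFFF" */
}
```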
* [PATCH v2 05/25] net/ixgbe: use common checks in L2 tunnel filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (3 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov ` (20 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in L2 tunnel filter. Some checks have become more stringent as a result, in particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 148 +++++++++------------------ 1 file changed, 47 insertions(+), 101 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 220b89f8fc..383d84fb92 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -145,6 +145,7 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, { const struct rte_flow_action *action; struct rte_eth_dev *dev = (struct rte_eth_dev *)param->driver_ctx; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); size_t idx; for (idx = 0; idx < actions->count; idx++) { @@ -162,6 +163,17 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, } break; } + case RTE_FLOW_ACTION_TYPE_VF: + { + const struct rte_flow_action_vf *vf = action->conf; + if (vf->id >= pci_dev->max_vfs) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "VF id out of range"); + } + break; + } default: /* no specific validation */ break; @@ -900,12 +912,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const 
struct rte_flow_attr if (ret) return ret; - /* only one action is supported */ - if (parsed_actions.count > 1) { - return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - parsed_actions.actions[1], - "Only one action can be specified at a time"); - } action = parsed_actions.actions[0]; ret = cons_parse_ethertype_filter(pattern, action, filter, error); @@ -1151,40 +1157,16 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr */ static int cons_parse_l2_tn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action *action, struct ixgbe_l2_tunnel_conf *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; const struct rte_flow_item_e_tag *e_tag_spec; const struct rte_flow_item_e_tag *e_tag_mask; - const struct rte_flow_action *act; - const struct rte_flow_action_vf *act_vf; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* The first not void item should be e-tag. 
*/ item = next_no_void_pattern(pattern, NULL); if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) { @@ -1242,71 +1224,13 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is VF or PF. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_VF && - act->type != RTE_FLOW_ACTION_TYPE_PF) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = (const struct rte_flow_action_vf *)act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *act_vf = action->conf; filter->pool = act_vf->id; } else { filter->pool = pci_dev->max_vfs; } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } @@ -1318,29 +1242,51 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, struct ixgbe_l2_tunnel_conf *l2_tn_filter, struct rte_flow_error *error) { - int ret = 0; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - uint16_t vf_num; - - ret = cons_parse_l2_tn_filter(dev, attr, pattern, - actions, l2_tn_filter, error); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only vf/pf is allowed here */ + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + int ret = 0; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_X550 && hw->mac.type != ixgbe_mac_X550EM_x && hw->mac.type != ixgbe_mac_X550EM_a && hw->mac.type != ixgbe_mac_E610) { - memset(l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); rte_flow_error_set(error, EINVAL, 
RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Not supported by L2 tunnel filter"); return -rte_errno; } - vf_num = pci_dev->max_vfs; + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - if (l2_tn_filter->pool > vf_num) - return -rte_errno; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); + } + action = parsed_actions.actions[0]; + + ret = cons_parse_l2_tn_filter(dev, pattern, action, l2_tn_filter, error); return ret; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
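The L2 tunnel patch moves the VF-id range check into the common callback and keeps only the pool resolution in the parser: a VF action targets that VF's pool, a PF action targets the pool just past the VF range (`max_vfs`). A simplified stand-in for that resolution, with made-up action-type constants:

```c
#include <assert.h>
#include <stdint.h>

#define ACT_VF 1
#define ACT_PF 2

/*
 * Resolve the destination pool for an L2 tunnel rule. A VF action must
 * name a VF below max_vfs (the check the patch adds to the common
 * callback); a PF action is directed to the pool index equal to max_vfs,
 * i.e. the slot after all VF pools. Sketch only, not the ixgbe code.
 */
static int l2_tn_resolve_pool(int action_type, uint16_t vf_id,
			      uint16_t max_vfs, uint16_t *pool)
{
	if (action_type == ACT_VF) {
		if (vf_id >= max_vfs)
			return -1; /* "VF id out of range" */
		*pool = vf_id;
	} else {
		*pool = max_vfs; /* PF pool follows the VF pools */
	}
	return 0;
}
```

Note the strict `>=` comparison matches the new common check, whereas the removed `l2_tn_filter->pool > vf_num` test in the old code let the boundary value through.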
* [PATCH v2 06/25] net/ixgbe: use common checks in ntuple filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (4 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov ` (19 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ntuple filter. As a result, some checks have become more stringent, in particular the group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 133 ++++++++------------------- 1 file changed, 36 insertions(+), 97 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 383d84fb92..20d163bce0 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -219,12 +219,11 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, static int cons_parse_ntuple_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action_queue *q_act, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_ipv4 *ipv4_spec; const struct rte_flow_item_ipv4 *ipv4_mask; const struct rte_flow_item_tcp *tcp_spec; @@ -240,24 +239,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, struct rte_flow_item_eth eth_null; struct rte_flow_item_vlan vlan_null; - if (!pattern) { - rte_flow_error_set(error, - EINVAL, 
RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* Priority must be 16-bit */ + if (attr->priority > UINT16_MAX) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority must be 16-bit"); } memset(ð_null, 0, sizeof(struct rte_flow_item_eth)); @@ -553,70 +539,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, action: - /** - * n-tuple only supports forwarding, - * check if the first not void action is QUEUE. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - item, "Not supported action."); - return -rte_errno; - } - filter->queue = - ((const struct rte_flow_action_queue *)act->conf)->index; + filter->queue = q_act->index; - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not 
supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } filter->priority = (uint16_t)attr->priority; - if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || - attr->priority > IXGBE_MAX_N_TUPLE_PRIO) - filter->priority = 1; + if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || attr->priority > IXGBE_MAX_N_TUPLE_PRIO) + filter->priority = 1; return 0; } @@ -736,15 +663,40 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { - int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540) return -ENOTSUP; - ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + ret = cons_parse_ntuple_filter(attr, pattern, action->conf, filter, error); if (ret) return ret; @@ -757,19 +709,6 @@ 
ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* Ixgbe doesn't support many priorities. */ - if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO || - filter->priority > IXGBE_MAX_N_TUPLE_PRIO) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "Priority not supported by ntuple filter"); - return -rte_errno; - } - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; - /* fixed value for ixgbe */ filter->flags = RTE_5TUPLE_FLAGS; return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
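The ntuple patch folds two previously separate priority checks into one place: reject values that do not fit in 16 bits, then clamp anything outside the hardware's supported 1..7 window to priority 1 instead of rejecting it. That combined behavior, as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

#define IXGBE_MIN_N_TUPLE_PRIO 1
#define IXGBE_MAX_N_TUPLE_PRIO 7

/*
 * Mirror of the ntuple priority handling in the patch: values above
 * UINT16_MAX are an error ("Priority must be 16-bit"); in-range values
 * are kept; 16-bit values outside 1..7 silently fall back to priority 1,
 * as the original driver did.
 */
static int ntuple_priority(uint32_t attr_priority, uint16_t *out)
{
	if (attr_priority > UINT16_MAX)
		return -1;
	*out = (uint16_t)attr_priority;
	if (attr_priority < IXGBE_MIN_N_TUPLE_PRIO ||
	    attr_priority > IXGBE_MAX_N_TUPLE_PRIO)
		*out = 1;
	return 0;
}
```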
* [PATCH v2 07/25] net/ixgbe: use common checks in security filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (5 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov ` (18 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in security filter. As a result, some checks have become more stringent. In particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 62 ++++++++++------------------ 1 file changed, 22 insertions(+), 40 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 20d163bce0..9dc2ad5e56 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -557,7 +557,16 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr const struct rte_flow_action_security *security; struct rte_security_session *session; const struct rte_flow_item *item; - const struct rte_flow_action *act; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only security is allowed here */ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; struct ip_spec spec; int ret; @@ -569,45 +578,18 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - if (pattern == NULL) { - 
rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - if (actions == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (attr == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - /* check if next non-void action is security */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } - security = act->conf; - if (security == NULL) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "NULL security action config."); - } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + action = parsed_actions.actions[0]; + security = action->conf; /* get the IP pattern*/ item = next_no_void_pattern(pattern, NULL); @@ -647,7 +629,7 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec); if (ret) { rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ACTION, act, + RTE_FLOW_ERROR_TYPE_ACTION, action, "Failed to add security session."); return -rte_errno; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
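The security-filter patch reduces roughly forty lines of hand-rolled checks to "exactly one action, of type SECURITY, with a non-NULL conf". A hypothetical helper capturing that single-action pattern (types and names invented for illustration, not the DPDK API):

```c
#include <assert.h>
#include <stddef.h>

enum act_type { ACT_END = 0, ACT_SECURITY, ACT_QUEUE };

struct flow_action {
	enum act_type type;
	const void *conf;
};

/*
 * Return the conf of the single action of the expected type, or NULL when
 * the list has the wrong type, a NULL conf, or more than one action before
 * the ACT_END terminator -- the cases the removed hand-rolled checks
 * guarded against individually.
 */
static const void *single_action_conf(const struct flow_action *actions,
				      enum act_type expected)
{
	if (actions[0].type != expected || actions[0].conf == NULL)
		return NULL;
	if (actions[1].type != ACT_END)
		return NULL;
	return actions[0].conf;
}
```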
* [PATCH v2 08/25] net/ixgbe: use common checks in FDIR filters 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (6 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov ` (17 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in flow director filters (both tunnel and normal). As a result, some checks have become more stringent, in particular group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 288 ++++++++++++--------------- 1 file changed, 126 insertions(+), 162 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 9dc2ad5e56..b718c72125 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -1212,111 +1212,6 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, return ret; } -/* Parse to get the attr and action info of flow director rule. 
*/ -static int -ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr, - const struct rte_flow_action actions[], - struct ixgbe_fdir_rule *rule, - struct rte_flow_error *error) -{ - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark; - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is QUEUE or DROP. */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - rule->queue = act_q->index; - } else { /* drop */ - /* signature mode does not support drop action. 
*/ - if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - rule->fdirflags = IXGBE_FDIRCMD_DROP; - } - - /* check if the next not void item is MARK */ - act = next_no_void_action(actions, act); - if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) && - (act->type != RTE_FLOW_ACTION_TYPE_END)) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rule->soft_id = 0; - - if (act->type == RTE_FLOW_ACTION_TYPE_MARK) { - mark = (const struct rte_flow_action_mark *)act->conf; - rule->soft_id = mark->id; - act = next_no_void_action(actions, act); - } - - /* check if the next not void item is END */ - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - return 0; -} - /* search next no void pattern and skip fuzzy */ static inline const struct rte_flow_item *next_no_fuzzy_pattern( @@ -1423,9 +1318,8 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[]) */ static int ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -1446,29 +1340,39 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, const struct rte_flow_item_vlan *vlan_mask; const struct rte_flow_item_raw *raw_mask; const struct rte_flow_item_raw *raw_spec; + const struct rte_flow_action *fwd_action, *aux_action; + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint8_t j; - struct 
ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + /* check if this is a signature match */ + if (signature_match(pattern)) + rule->mode = RTE_FDIR_MODE_SIGNATURE; + else + rule->mode = RTE_FDIR_MODE_PERFECT; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + /* signature mode does not support drop action. */ + if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, fwd_action, + "Signature mode does not support drop action."); + return -rte_errno; + } + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *m_act = aux_action->conf; + rule->soft_id = m_act->id; } /** @@ -1500,11 +1404,6 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, return -rte_errno; } - if (signature_match(pattern)) - rule->mode = RTE_FDIR_MODE_SIGNATURE; - else - rule->mode = RTE_FDIR_MODE_PERFECT; - /*Not supported last point for range*/ if (item->last) { rte_flow_error_set(error, EINVAL, @@ -2093,7 +1992,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, } } - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; } #define NVGRE_PROTOCOL 0x6558 @@ -2136,9 +2035,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, * 
item->last should be NULL. */ static int -ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_item pattern[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -2151,27 +2049,25 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, const struct rte_flow_item_eth *eth_mask; const struct rte_flow_item_vlan *vlan_spec; const struct rte_flow_item_vlan *vlan_mask; + const struct rte_flow_action *fwd_action, *aux_action; uint32_t j; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up queue/drop action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *mark = aux_action->conf; + rule->soft_id = mark->id; } /** @@ -2582,7 +2478,54 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, * Do nothing. 
*/ - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; +} + +/* + * Check flow director actions + */ +static int +ixgbe_fdir_actions_check(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const enum rte_flow_action_type fwd_actions[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }; + const struct rte_flow_action *action, *drop_action = NULL; + + /* do the generic checks first */ + int ret = ixgbe_flow_actions_check(parsed_actions, param, error); + if (ret) + return ret; + + /* first action must be a forwarding action */ + action = parsed_actions->actions[0]; + if (!ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "First action must be QUEUE or DROP"); + } + /* remember if we have a drop action */ + if (action->type == RTE_FLOW_ACTION_TYPE_DROP) + drop_action = action; + + /* second action, if specified, must not be a forwarding action */ + action = parsed_actions->actions[1]; + if (action != NULL && ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + /* if we didn't have a drop action before but now we do, remember that */ + if (drop_action == NULL && action != NULL && action->type == RTE_FLOW_ACTION_TYPE_DROP) + drop_action = action; + /* drop must be the only action */ + if (drop_action != NULL && action != NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + return 0; } static int @@ -2596,24 +2539,45 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ci_flow_actions 
parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* queue/mark/drop allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_fdir_actions_check + }; + + if (hw->mac.type != ixgbe_mac_82599EB && + hw->mac.type != ixgbe_mac_X540 && + hw->mac.type != ixgbe_mac_X550 && + hw->mac.type != ixgbe_mac_X550EM_x && + hw->mac.type != ixgbe_mac_X550EM_a && + hw->mac.type != ixgbe_mac_E610) + return -ENOTSUP; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE; - if (hw->mac.type != ixgbe_mac_82599EB && - hw->mac.type != ixgbe_mac_X540 && - hw->mac.type != ixgbe_mac_X550 && - hw->mac.type != ixgbe_mac_X550EM_x && - hw->mac.type != ixgbe_mac_X550EM_a && - hw->mac.type != ixgbe_mac_E610) - return -ENOTSUP; - - ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_normal(dev, pattern, &parsed_actions, rule, error); if (!ret) goto step_next; - ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_tunnel(pattern, &parsed_actions, rule, error); if (ret) return ret; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 09/25] net/ixgbe: use common checks in RSS filter 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (7 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 10/25] net/i40e: use common flow attribute checks Anatoly Burakov ` (16 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in RSS filter. As a result, some checks have become more stringent, in particular: - the group attribute is now explicitly rejected instead of being ignored - RSS now explicitly rejects unsupported RSS types at validation - the priority attribute was previously allowed but rejected values bigger than 0xFFFF despite not using priority anywhere - it is now rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 202 +++++++++++---------------- 1 file changed, 85 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index b718c72125..ff86efe163 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -121,20 +121,6 @@ const struct rte_flow_item *next_no_void_pattern( } } -static inline -const struct rte_flow_action *next_no_void_action( - const struct rte_flow_action actions[], - const struct rte_flow_action *cur) -{ - const struct rte_flow_action *next = - cur ? cur + 1 : &actions[0]; - while (1) { - if (next->type != RTE_FLOW_ACTION_TYPE_VOID) - return next; - next++; - } -} - /* * All ixgbe engines mostly check the same stuff, so use a common check. 
*/ @@ -2607,6 +2593,62 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, return ret; } +/* Flow actions check specific to RSS filter */ +static int +ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action = parsed_actions->actions[0]; + const struct rte_flow_action_rss *rss_act = action->conf; + struct rte_eth_dev *dev = param->driver_ctx; + const size_t rss_key_len = sizeof(((struct ixgbe_rte_flow_rss_conf *)0)->key); + size_t q_idx, q; + + /* check if queue list is not empty */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue list is empty"); + } + + /* check if each RSS queue is valid */ + for (q_idx = 0; q_idx < rss_act->queue_num; q_idx++) { + q = rss_act->queue[q_idx]; + if (q >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue specified"); + } + } + + /* only support default hash function */ + if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Non-default RSS hash functions are not supported"); + } + /* levels aren't supported */ + if (rss_act->level) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "A nonzero RSS encapsulation level is not supported"); + } + /* check key length */ + if (rss_act->key_len != 0 && rss_act->key_len != rss_key_len) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key must be exactly 40 bytes long"); + } + /* filter out unsupported RSS types */ + if ((rss_act->types & ~IXGBE_RSS_OFFLOAD_ALL) != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS type specified"); + } + return 0; +} 
+ static int ixgbe_parse_rss_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, @@ -2614,109 +2656,35 @@ ixgbe_parse_rss_filter(struct rte_eth_dev *dev, struct ixgbe_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - const struct rte_flow_action *act; - const struct rte_flow_action_rss *rss; - uint16_t n; - - /** - * rss only supports forwarding, - * check if the first not void action is RSS. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rss = (const struct rte_flow_action_rss *)act->conf; - - if (!rss || !rss->queue_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "no valid queues"); - return -rte_errno; - } - - for (n = 0; n < rss->queue_num; n++) { - if (rss->queue[n] >= dev->data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "queue id > max number of queues"); - return -rte_errno; - } - } - - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "non-default RSS hash functions are not supported"); - if (rss->level) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "a nonzero RSS encapsulation level is not supported"); - if (rss->key_len && rss->key_len != RTE_DIM(rss_conf->key)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS hash key must be exactly 40 bytes"); - if (rss->queue_num > RTE_DIM(rss_conf->queue)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "too many queues for RSS context"); - if (ixgbe_rss_conf_init(rss_conf, rss)) - return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS context initialization 
failure"); - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only rss allowed here */ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev, + .check = ixgbe_flow_actions_check_rss, + .max_actions = 1, + }; + int ret; + const struct rte_flow_action *action; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + if 
(ixgbe_rss_conf_init(rss_conf, action->conf)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "RSS context initialization failure"); return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 10/25] net/i40e: use common flow attribute checks 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (8 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov ` (15 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson There is no need to call the same attribute checks in multiple places when there are no cases where we do otherwise. Therefore, remove all the attribute check calls from all filter call paths and instead check the attributes once at the very beginning of flow validation, and use common flow attribute checks instead of driver-local ones. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_ethdev.h | 1 - drivers/net/intel/i40e/i40e_flow.c | 121 +++------------------------ 2 files changed, 10 insertions(+), 112 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h index d57c53f661..91ad0f8d0e 100644 --- a/drivers/net/intel/i40e/i40e_ethdev.h +++ b/drivers/net/intel/i40e/i40e_ethdev.h @@ -1315,7 +1315,6 @@ struct i40e_filter_ctx { }; typedef int (*parse_filter_t)(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 84cfddb92d..4d21df327b 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -26,6 +26,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET) 
#define I40E_IPV6_FRAG_HEADER 44 #define I40E_TENANT_ARRAY_NUM 3 @@ -74,40 +76,32 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, const struct rte_flow_action *actions, struct rte_flow_error *error, struct i40e_tunnel_filter_conf *filter); -static int i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error); static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -121,7 +115,6 @@ static int i40e_flow_flush_ethertype_filter(struct i40e_pf *pf); static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf); static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const 
struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -133,7 +126,6 @@ i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter); static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1211,54 +1203,6 @@ i40e_find_parse_filter_func(struct rte_flow_item *pattern, uint32_t *idx) return parse_filter; } -/* Parse attributes */ -static int -i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - static int i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid) { @@ -1446,7 +1390,6 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1465,10 
+1408,6 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_ETHERTYPE; return ret; @@ -2564,7 +2503,6 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2581,10 +2519,6 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_FDIR; return 0; @@ -2849,7 +2783,6 @@ i40e_flow_parse_l4_pattern(const struct rte_flow_item *pattern, static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2866,10 +2799,6 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3100,7 +3029,6 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3118,10 +3046,6 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3351,7 +3275,6 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct 
rte_flow_action actions[], struct rte_flow_error *error, @@ -3369,10 +3292,6 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3507,7 +3426,6 @@ i40e_flow_parse_mpls_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3525,10 +3443,6 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3659,7 +3573,6 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev, static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3677,10 +3590,6 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3776,7 +3685,6 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3794,10 +3702,6 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3816,7 +3720,12 @@ i40e_flow_check(struct rte_eth_dev *dev, uint32_t item_num = 0; /* non-void item number of pattern*/ uint32_t i = 0; bool flag = false; - int ret = I40E_NOT_SUPPORTED; + int 
ret; + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) { + return ret; + } if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3831,22 +3740,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* Get the non-void item of action */ while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) i++; if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter_ctx->type = RTE_ETH_FILTER_HASH; return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); } @@ -3871,6 +3769,7 @@ i40e_flow_check(struct rte_eth_dev *dev, i40e_pattern_skip_void_item(items, pattern); i = 0; + ret = I40E_NOT_SUPPORTED; do { parse_filter = i40e_find_parse_filter_func(items, &i); if (!parse_filter && !flag) { @@ -3883,7 +3782,7 @@ i40e_flow_check(struct rte_eth_dev *dev, } if (parse_filter) - ret = parse_filter(dev, attr, items, actions, error, filter_ctx); + ret = parse_filter(dev, items, actions, error, filter_ctx); flag = true; } while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns))); -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 11/25] net/i40e: refactor RSS flow parameter checks 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (9 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 10/25] net/i40e: use common flow attribute checks Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov ` (14 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson Currently, the hash parser parameter checks are somewhat confusing as they have multiple mutually exclusive code paths and requirements, and it's difficult to reason about them because RSS flow parsing is interspersed with validation checks. To address that, refactor hash engine error checking to perform almost all validation at the beginning, with only happy paths being implemented in actual parsing functions. 
Some parameter combinations that were previously ignored (and perhaps produced a warning) are now explicitly rejected: - if no pattern is specified, RSS types are rejected - for queue lists and regions, RSS key is rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 14 +- drivers/net/intel/i40e/i40e_hash.c | 425 +++++++++++++++++------------ drivers/net/intel/i40e/i40e_hash.h | 2 +- 3 files changed, 262 insertions(+), 179 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 4d21df327b..191bbd41a4 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -3726,6 +3726,7 @@ i40e_flow_check(struct rte_eth_dev *dev, if (ret) { return ret; } + /* action and pattern validation will happen in each respective engine */ if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3740,14 +3741,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - /* Get the non-void item of action */ - while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) - i++; - - if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - filter_ctx->type = RTE_ETH_FILTER_HASH; - return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); - } + /* try parsing as RSS */ + filter_ctx->type = RTE_ETH_FILTER_HASH; + ret = i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error); + if (!ret) + return ret; i = 0; /* Get the non-void item number of pattern */ diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c index 5756ebf255..6596becadd 100644 --- a/drivers/net/intel/i40e/i40e_hash.c +++ b/drivers/net/intel/i40e/i40e_hash.c @@ -16,6 +16,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #ifndef BIT #define BIT(n) (1UL << (n)) #endif @@ -925,12 +927,7 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, { const 
uint8_t *key = rss_act->key; - if (!key || rss_act->key_len != sizeof(rss_conf->key)) { - if (rss_act->key_len != sizeof(rss_conf->key)) - PMD_DRV_LOG(WARNING, - "RSS key length invalid, must be %u bytes, now set key to default", - (uint32_t)sizeof(rss_conf->key)); - + if (key == NULL) { memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key)); } else { memcpy(rss_conf->key, key, sizeof(rss_conf->key)); @@ -941,45 +938,29 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, } static int -i40e_hash_parse_queues(const struct rte_eth_dev *dev, - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf, + struct rte_flow_error *error) { - struct i40e_pf *pf; - struct i40e_hw *hw; - uint16_t i; - uint16_t max_queue; - - hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (!rss_act->queue_num || - rss_act->queue_num > hw->func_caps.rss_table_size) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queue number"); + rss_conf->symmetric_enable = rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when queues specified"); + i40e_hash_parse_key(rss_act, rss_conf); - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) - max_queue = i40e_pf_calc_configured_queues_num(pf); - else - max_queue = pf->dev_data->nb_rx_queues; + rss_conf->conf.func = rss_act->func; + rss_conf->conf.types = rss_act->types; + rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); - max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); - - for (i = 0; i < rss_act->queue_num; i++) { - if (rss_act->queue[i] >= 
max_queue) - break; - } - - if (i < rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queues"); + return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, + rss_conf, error); +} +static int +i40e_hash_parse_queues(const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf) +{ memcpy(rss_conf->queue, rss_act->queue, rss_act->queue_num * sizeof(rss_conf->queue[0])); rss_conf->conf.queue = rss_conf->queue; @@ -988,113 +969,38 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev, } static int -i40e_hash_parse_queue_region(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], +i40e_hash_parse_queue_region(const struct rte_flow_item pattern[], const struct rte_flow_action_rss *rss_act, struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - struct i40e_pf *pf; const struct rte_flow_item_vlan *vlan_spec, *vlan_mask; - uint64_t hash_queues; - uint32_t i; - - if (pattern[1].type != RTE_FLOW_ITEM_TYPE_END) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - &pattern[1], - "Pattern not supported."); vlan_spec = pattern->spec; vlan_mask = pattern->mask; - if (!vlan_spec || !vlan_mask || - (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, pattern, - "Pattern error."); - if (!rss_act->queue) + /* VLAN must have spec and mask */ + if (vlan_spec == NULL || vlan_mask == NULL) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queues not specified"); - - if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when configure queue region"); + RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN pattern spec and mask required"); + } + /* for mask, VLAN/TCI must be masked appropriately */ + if ((rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 0x7) { + return rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN pattern mask invalid"); + } /* Use a 64 bit variable to represent all queues in a region. */ RTE_BUILD_BUG_ON(I40E_MAX_Q_PER_TC > 64); - if (!rss_act->queue_num || - !rte_is_power_of_2(rss_act->queue_num) || - rss_act->queue_num + rss_act->queue[0] > I40E_MAX_Q_PER_TC) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queue number error"); - - for (i = 1; i < rss_act->queue_num; i++) { - if (rss_act->queue[i - 1] + 1 != rss_act->queue[i]) - break; - } - - if (i < rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Queues must be incremented continuously"); - - /* Map all queues to bits of uint64_t */ - hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & - ~(BIT_ULL(rss_act->queue[0]) - 1); - - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (hash_queues & ~pf->hash_enabled_queues) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Some queues are not in LUT"); - rss_conf->region_queue_num = (uint8_t)rss_act->queue_num; rss_conf->region_queue_start = rss_act->queue[0]; rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13; return 0; } -static int -i40e_hash_parse_global_conf(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) -{ - if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Symmetric function should be set with pattern types"); - - rss_conf->conf.func = rss_act->func; - - if (rss_act->types) - PMD_DRV_LOG(WARNING, - "RSS types are ignored when no pattern specified"); - - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_queue_region(dev, pattern, rss_act, - rss_conf, error); - - 
if (rss_act->queue) - return i40e_hash_parse_queues(dev, rss_act, rss_conf, error); - - if (rss_act->key_len) { - i40e_hash_parse_key(rss_act, rss_conf); - return 0; - } - - if (rss_act->func == RTE_ETH_HASH_FUNCTION_DEFAULT) - PMD_DRV_LOG(WARNING, "Nothing change"); - return 0; -} - static bool i40e_hash_validate_rss_types(uint64_t rss_types) { @@ -1124,83 +1030,262 @@ i40e_hash_validate_rss_types(uint64_t rss_types) } static int -i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_validate_rss_pattern(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) { - if (rss_act->queue) + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + + /* queue list is not supported */ + if (rss_act->queue_num == 0) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS Queues not supported when pattern specified"); - rss_conf->symmetric_enable = false; /* by default, symmetric is disabled */ + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues not supported when pattern specified"); + } + /* disallow unsupported hash functions */ switch (rss_act->func) { case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: - rss_conf->symmetric_enable = true; - break; case RTE_ETH_HASH_FUNCTION_DEFAULT: case RTE_ETH_HASH_FUNCTION_TOEPLITZ: case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: break; default: return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS hash function not supported " - "when pattern specified"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS hash function not supported when pattern specified"); } if (!i40e_hash_validate_rss_types(rss_act->types)) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "RSS 
types are invalid"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "RSS types are invalid"); - if (rss_act->key_len) - i40e_hash_parse_key(rss_act, rss_conf); + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } - rss_conf->conf.func = rss_act->func; - rss_conf->conf.types = rss_act->types; - rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); + return 0; +} - return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, - rss_conf, error); +static int +i40e_hash_validate_rss_common(const struct rte_flow_action_rss *rss_act, + struct rte_flow_error *error) +{ + /* for empty patterns, symmetric toeplitz is not supported */ + if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Symmetric hash function not supported without specific patterns"); + } + + /* hash types are not supported for global RSS configuration */ + if (rss_act->types != 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS types not supported without a pattern"); + } + + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_region(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct rte_eth_dev *dev = param->driver_ctx; + const struct i40e_pf *pf; + uint64_t hash_queues; + + if (i40e_hash_validate_rss_common(rss_act, error)) + return 
-rte_errno; + + RTE_BUILD_BUG_ON(sizeof(hash_queues) != sizeof(pf->hash_enabled_queues)); + + /* having RSS key is not supported */ + if (rss_act->key != NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key not supported"); + } + + /* queue region must be specified */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues missing"); + } + + /* queue region must be power of two */ + if (!rte_is_power_of_2(rss_act->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue number must be power of two"); + } + + /* generic checks already filtered out discontiguous/non-unique RSS queues */ + + /* queues must not exceed maximum queues per traffic class */ + if (rss_act->queue[rss_act->queue_num - 1] >= I40E_MAX_Q_PER_TC) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue index"); + } + + /* queues must be in LUT */ + pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & + ~(BIT_ULL(rss_act->queue[0]) - 1); + + if (hash_queues & ~pf->hash_enabled_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Some queues are not in LUT"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_list(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct rte_eth_dev *dev = param->driver_ctx; + struct i40e_pf *pf; + struct i40e_hw *hw; + uint16_t max_queue; + bool has_queue, has_key; + + if (i40e_hash_validate_rss_common(rss_act, error)) + return -rte_errno; + + has_queue = rss_act->queue != NULL; + has_key = rss_act->key != NULL; + + /* if we have queues, we must not have 
key */ + if (has_queue && has_key) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key for queue region is not supported"); + } + + /* if there are no queues, no further checks needed */ + if (!has_queue) + return 0; + + /* check queue number limits */ + hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + if (rss_act->queue_num > hw->func_caps.rss_table_size) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Too many RSS queues"); + } + + pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) + max_queue = i40e_pf_calc_configured_queues_num(pf); + else + max_queue = pf->dev_data->nb_rx_queues; + + max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); + + /* we know RSS queues are contiguous so we only need to check last queue */ + if (rss_act->queue[rss_act->queue_num - 1] >= max_queue) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue"); + } + + return 0; } int -i40e_hash_parse(const struct rte_eth_dev *dev, +i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev + /* each pattern type will add specific check function */ + }; const struct rte_flow_action_rss *rss_act; + int ret; - if (actions[1].type != RTE_FLOW_ACTION_TYPE_END) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - &actions[1], - "Only support one action for RSS."); - - rss_act = (const struct rte_flow_action_rss *)actions[0].conf; - if (rss_act->level) - return rte_flow_error_set(error, 
ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - actions, - "RSS level is not supported"); + /* + * We have two possible paths: global RSS configuration, and an RSS pattern action. + * + * For global patterns, we act on two types of flows: + * - Empty pattern ([END]) + * - VLAN pattern ([VLAN] -> [END]) + * + * Everything else is handled by pattern action parser. + */ + bool is_empty, is_vlan; while (pattern->type == RTE_FLOW_ITEM_TYPE_VOID) pattern++; - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_END || - pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_global_conf(dev, pattern, rss_act, - rss_conf, error); + is_empty = pattern[0].type == RTE_FLOW_ITEM_TYPE_END; + is_vlan = pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN && + pattern[1].type == RTE_FLOW_ITEM_TYPE_END; - return i40e_hash_parse_pattern_act(dev, pattern, rss_act, - rss_conf, error); + /* VLAN path */ + if (is_vlan) { + ac_param.check = i40e_hash_validate_queue_region; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + /* set up RSS functions */ + rss_conf->conf.func = rss_act->func; + return i40e_hash_parse_queue_region(pattern, rss_act, rss_conf, error); + } + /* Empty pattern path */ + if (is_empty) { + ac_param.check = i40e_hash_validate_queue_list; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + rss_conf->conf.func = rss_act->func; + /* if there is a queue list, take that path */ + if (rss_act->queue != NULL) + return i40e_hash_parse_queues(rss_act, rss_conf); + /* otherwise just parse RSS key */ + if (rss_act->key != NULL) + i40e_hash_parse_key(rss_act, rss_conf); + return 0; + } + ac_param.check = i40e_hash_validate_rss_pattern; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + + /* pattern case */ 
+ return i40e_hash_parse_pattern_act(dev, pattern, rss_act, rss_conf, error); } static void diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h index 2513d84565..99df4bccd0 100644 --- a/drivers/net/intel/i40e/i40e_hash.h +++ b/drivers/net/intel/i40e/i40e_hash.h @@ -13,7 +13,7 @@ extern "C" { #endif -int i40e_hash_parse(const struct rte_eth_dev *dev, +int i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 12/25] net/i40e: use common action checks for ethertype 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (10 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov ` (13 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for ethertype filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 56 +++++++++++++----------------- 1 file changed, 24 insertions(+), 32 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 191bbd41a4..2454d3e5ca 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1348,43 +1348,35 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, struct rte_flow_error *error, struct rte_eth_ethertype_filter *filter) { - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; - /* Check if the first non-void action is QUEUE or DROP. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *act_q = action->conf; + /* check queue index */ + if (act_q->index >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue index"); + } filter->queue = act_q->index; - if (filter->queue >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for" - " ethertype_filter."); - return -rte_errno; - } - } else { + } else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) { filter->flags |= RTE_ETHTYPE_FLAGS_DROP; } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 13/25] net/i40e: use common action checks for FDIR 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (11 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov ` (12 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for flow director filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 139 ++++++++++++++++------------- 1 file changed, 76 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 2454d3e5ca..3470f35a20 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -2396,28 +2396,49 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, struct i40e_fdir_filter_conf *filter) { struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_FLAG, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is QUEUE or DROP or PASSTHRU. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; + + switch (first->type) { case RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = act->conf; + { + const struct rte_flow_action_queue *act_q = first->conf; + /* check against PF constraints */ + if (!filter->input.flow_ext.is_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } + /* check against VF constraints */ + if (filter->input.flow_ext.is_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } filter->action.rx_queue = act_q->index; - if ((!filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->dev_data->nb_rx_queues) || - (filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue ID for FDIR."); - return -rte_errno; - } filter->action.behavior = I40E_FDIR_ACCEPT; break; + } case RTE_FLOW_ACTION_TYPE_DROP: filter->action.behavior = I40E_FDIR_REJECT; break; @@ -2425,69 +2446,61 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, filter->action.behavior = I40E_FDIR_PASSTHRU; break; case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *act_m = first->conf; filter->action.behavior = I40E_FDIR_PASSTHRU; - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; - break; + filter->soft_id = act_m->id; + break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + 
"Invalid first action for FDIR"); } - /* Check if the next non-void item is MARK or FLAG or END. */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + /* do we have another? */ + if (second == NULL) + return 0; + + switch (second->type) { case RTE_FLOW_ACTION_TYPE_MARK: - if (mark_spec) { - /* Double MARK actions requested */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + const struct rte_flow_action_mark *act_m = second->conf; + /* only one mark action can be specified */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; + filter->soft_id = act_m->id; break; + } case RTE_FLOW_ACTION_TYPE_FLAG: - if (mark_spec) { - /* MARK + FLAG not supported */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + /* mark + flag is unsupported */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } filter->action.report_status = I40E_FDIR_NO_REPORT_STATUS; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - if (filter->action.behavior != I40E_FDIR_PASSTHRU) { - /* RSS filter won't be next if FDIR did not pass thru */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + /* RSS filter only can be after passthru or mark */ + if (first->type != RTE_FLOW_ACTION_TYPE_PASSTHRU && + first->type != RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } break; - case RTE_FLOW_ACTION_TYPE_END: - return 0; default: - 
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; - } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 14/25] net/i40e: use common action checks for tunnel 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (12 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 15/25] net/iavf: use common flow attribute checks Anatoly Burakov ` (11 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action parsing infrastructure to check flow actions for the flow director tunnel filter. As a result, more stringent checks are performed on parameters, specifically the following: - reject NULL conf for the VF action (instead of attempting a NULL dereference) - the second action was expected to be QUEUE, but was previously silently ignored if it was not Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 104 +++++++++++++---------- 1 file changed, 48 insertions(+), 56 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 3470f35a20..88cdc59429 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1104,15 +1104,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = { { pattern_fdir_ipv6_sctp, i40e_flow_parse_l4_cloud_filter }, }; -#define NEXT_ITEM_OF_ACTION(act, actions, index) \ - do { \ - act = actions + index; \ - while (act->type == RTE_FLOW_ACTION_TYPE_VOID) { \ - index++; \ - act = actions + index; \ - } \ - } while (0) - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * i40e_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2539,61 +2530,62 @@ i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter) { struct i40e_pf
*pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_vf *act_vf; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is PF or VF. */ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_PF && - act->type != RTE_FLOW_ACTION_TYPE_VF) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = act->conf; - filter->vf_id = act_vf->id; + /* first action must be PF or VF */ + if (first->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *vf = first->conf; + if (vf->id >= pf->vf_num) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid VF ID for tunnel filter"); + return -rte_errno; + } + filter->vf_id = vf->id; filter->is_to_vf = 1; - if (filter->vf_id >= pf->vf_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid VF ID for tunnel filter"); - return -rte_errno; - } + } else if (first->type != RTE_FLOW_ACTION_TYPE_PF) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Unsupported action"); } - /* Check if the next non-void item is QUEUE */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = 
act->conf; - filter->queue_id = act_q->index; - if ((!filter->is_to_vf) && - (filter->queue_id >= pf->dev_data->nb_rx_queues)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } else if (filter->is_to_vf && - (filter->queue_id >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } - } + /* check if second action is QUEUE */ + if (second == NULL) + return 0; - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; + act_q = second->conf; + /* check queue ID for PF flow */ + if (!filter->is_to_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); + } + /* check queue ID for VF flow */ + if (filter->is_to_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); } + filter->queue_id = act_q->index; return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 15/25] net/iavf: use common flow attribute checks 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (13 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov ` (10 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Replace custom attr checks with a call to common checks. Flow subscription engine supports priority but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 8 ++- drivers/net/intel/iavf/iavf_fsub.c | 20 ++++++- drivers/net/intel/iavf/iavf_generic_flow.c | 67 +++------------------- drivers/net/intel/iavf/iavf_generic_flow.h | 2 +- drivers/net/intel/iavf/iavf_hash.c | 10 ++-- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 8 ++- 6 files changed, 43 insertions(+), 72 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 9eae874800..7dce5086cf 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #include "virtchnl.h" #include "iavf_rxtx.h" @@ -1592,7 +1593,7 @@ iavf_fdir_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1603,8 +1604,9 @@ iavf_fdir_parse(struct iavf_adapter *ad, memset(filter, 0, sizeof(*filter)); - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, 
error); + if (ret) + return ret; item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index bfb34695de..010c1d5a44 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -20,6 +20,7 @@ #include <rte_flow.h> #include <iavf.h> #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define IAVF_IPV6_ADDR_LENGTH 16 @@ -725,12 +726,15 @@ iavf_fsub_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; int ret = 0; filter = rte_zmalloc(NULL, sizeof(*filter), 0); @@ -741,6 +745,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, return -ENOMEM; } + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + goto error; + + if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Only support priority 0 and 1."); + ret = -rte_errno; + goto error; + } + /* search flow subscribe pattern */ pattern_match_item = iavf_search_pattern_match_item(pattern, array, array_len, error); @@ -762,7 +778,7 @@ iavf_fsub_parse(struct iavf_adapter *ad, goto error; /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, priority, + ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, error, filter); error: diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 42ecc90d1d..b8f6414b16 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -1785,7 +1785,7 @@ enum 
rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2_ppp_ipv6_tcp[] = { typedef struct iavf_flow_engine * (*parse_engine_t)(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -1939,45 +1939,6 @@ iavf_unregister_parser(struct iavf_flow_parser *parser, } } -static int -iavf_flow_valid_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* support priority for flow subscribe */ - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * iavf_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2106,7 +2067,7 @@ static struct iavf_flow_engine * iavf_parse_engine_create(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2120,7 +2081,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - 
pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2136,7 +2097,7 @@ static struct iavf_flow_engine * iavf_parse_engine_validate(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2150,7 +2111,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2182,7 +2143,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t iavf_parse_engine, struct rte_flow_error *error) { - int ret = IAVF_ERR_CONFIG; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); @@ -2200,29 +2160,18 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - - ret = iavf_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->dist_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->ipsec_crypto_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h index 
b97cf8b7ff..ddc554996d 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.h +++ b/drivers/net/intel/iavf/iavf_generic_flow.h @@ -471,7 +471,7 @@ typedef int (*parse_pattern_action_t)(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index cb10eeab78..3607d6d680 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -22,6 +22,7 @@ #include "iavf_log.h" #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #define IAVF_PHINT_NONE 0 #define IAVF_PHINT_GTPU BIT_ULL(0) @@ -86,7 +87,7 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1519,7 +1520,7 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1528,8 +1529,9 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint64_t phint = IAVF_PHINT_NONE; int ret = 0; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index 47102e75f2..fd35997cbd 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -14,6 +14,7 @@ #include "iavf_rxtx.h" #include "iavf_log.h" 
#include "iavf_generic_flow.h" +#include "../common/flow_check.h" #include "iavf_ipsec_crypto.h" #include "iavf_ipsec_crypto_capabilities.h" @@ -1951,15 +1952,16 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_pattern_match_item *item = NULL; int ret = -1; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 16/25] net/iavf: use common action checks for IPsec 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (14 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 15/25] net/iavf: use common flow attribute checks Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 17/25] net/iavf: use common action checks for hash Anatoly Burakov ` (9 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for IPsec filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 34 +++++++++------------- 1 file changed, 14 insertions(+), 20 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index fd35997cbd..6466d84cfb 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -1745,26 +1745,12 @@ parse_udp_item(const struct rte_flow_item_udp *item, struct rte_udp_hdr *udp) udp->src_port = item->hdr.src_port; } -static int -has_security_action(const struct rte_flow_action actions[], - const void **session) -{ - /* only {SECURITY; END} supported */ - if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && - actions[1].type == RTE_FLOW_ACTION_TYPE_END) { - *session = actions[0].conf; - return true; - } - return false; -} - static struct iavf_ipsec_flow_item * iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_security_session *session, uint32_t type) { - const void *session; struct iavf_ipsec_flow_item *ipsec_flow = rte_malloc("security-flow-rule", sizeof(struct iavf_ipsec_flow_item), 0); @@ -1831,9 +1817,6 @@ 
iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, goto flow_cleanup; } - if (!has_security_action(actions, &session)) - goto flow_cleanup; - if (!iavf_ipsec_crypto_action_valid(ethdev, session, ipsec_flow->spi)) goto flow_cleanup; @@ -1956,6 +1939,14 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; struct iavf_pattern_match_item *item = NULL; int ret = -1; @@ -1963,12 +1954,15 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, if (ret) return ret; + if (ci_flow_check_actions(actions, ¶m, &parsed_actions, error) < 0) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { + const struct rte_security_session *session = parsed_actions.actions[0]->conf; uint32_t type = (uint64_t)(item->meta); struct iavf_ipsec_flow_item *fi = - iavf_ipsec_flow_item_parse(ad->vf.eth_dev, - pattern, actions, type); + iavf_ipsec_flow_item_parse(ad->vf.eth_dev, pattern, session, type); if (fi && meta) { *meta = fi; ret = 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 17/25] net/iavf: use common action checks for hash 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (15 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov ` (8 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_hash.c | 143 +++++++++++++++-------------- 1 file changed, 72 insertions(+), 71 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index 3607d6d680..75dde35764 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -1427,95 +1427,81 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -iavf_hash_parse_action(struct iavf_pattern_match_item *match_item, - const struct rte_flow_action actions[], - uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, - struct rte_flow_error *error) +iavf_hash_parse_rss_type(struct iavf_pattern_match_item *match_item, + const struct rte_flow_action_rss *rss, + uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, + struct rte_flow_error *error) { - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; - /* Supported action is RSS. 
*/ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; + rss_meta->rss_algorithm = rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ ? + VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC : + VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_XOR_ASYMMETRIC; - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "function simple_xor is not supported"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC; - } else { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - } + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == IAVF_PHINT_RAW) + return 0; - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); + if (iavf_any_invalid_rss_type(rss->func, rss_type, match_item->input_set_mask)) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS type not supported"); + } - if (rss->queue_num) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); + memcpy(&rss_meta->proto_hdrs, match_item->meta, sizeof(struct virtchnl_proto_hdrs)); - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == IAVF_PHINT_RAW) - break; + iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, rss_type, pattern_hint); - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. - */ - rss_type = rte_eth_rss_hf_refine(rss_type); + return 0; +} - if (iavf_any_invalid_rss_type(rss->func, rss_type, - match_item->input_set_mask)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); +static int +iavf_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss = actions->actions[0]->conf; - memcpy(&rss_meta->proto_hdrs, match_item->meta, - sizeof(struct virtchnl_proto_hdrs)); + /* filter out unsupported RSS functions */ + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + default: + break; + } - iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, - rss_type, pattern_hint); - break; + if (rss->level != 0) { + return rte_flow_error_set(error, ENOTSUP, + 
RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Nonzero RSS encapsulation level is not supported"); + } - case RTE_FLOW_ACTION_TYPE_END: - break; + if (rss->key_len != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS key is not supported"); + } - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; - } + if (rss->queue_num != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queue region is not supported"); } return 0; } static int -iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, +iavf_hash_parse_pattern_action(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, uint32_t array_len, const struct rte_flow_item pattern[], @@ -1524,6 +1510,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = ad, + .check = iavf_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct iavf_pattern_match_item *pattern_match_item; struct iavf_rss_meta *rss_meta_ptr; uint64_t phint = IAVF_PHINT_NONE; @@ -1533,6 +1530,10 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1565,8 +1566,8 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, } } - ret = iavf_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret = 
iavf_hash_parse_rss_type(pattern_match_item, rss, phint, rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 18/25] net/iavf: use common action checks for FDIR 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (16 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 17/25] net/iavf: use common action checks for hash Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 19/25] net/iavf: use common action checks for fsub Anatoly Burakov ` (7 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 359 +++++++++++++---------------- 1 file changed, 157 insertions(+), 202 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 7dce5086cf..1f17f8fa24 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -441,204 +441,6 @@ static struct iavf_flow_engine iavf_fdir_engine = { .type = IAVF_FLOW_ENGINE_FDIR, }; -static int -iavf_fdir_parse_action_qregion(struct iavf_adapter *ad, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct virtchnl_filter_action *filter_action) -{ - struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != rss->queue[i] + 1) 
{ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - if (rss->queue_num > vf->max_rss_qregion) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size cannot be large than the supported max RSS queue region"); - return -rte_errno; - } - - filter_action->act_conf.queue.index = rss->queue[0]; - filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; - - return 0; -} - -static int -iavf_fdir_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct iavf_fdir_conf *filter) -{ - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - int ret; - - int number = 0; - struct virtchnl_filter_action *filter_action; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - 
filter_action->type = VIRTCHNL_ACTION_DROP; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_QUEUE; - filter_action->act_conf.queue.index = act_q->index; - - if (filter_action->act_conf.queue.index >= - ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid queue for FDIR."); - return -rte_errno; - } - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_Q_REGION; - - ret = iavf_fdir_parse_action_qregion(ad, - error, actions, filter_action); - if (ret) - return ret; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - - filter->mark_flag = 1; - mark_spec = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_MARK; - filter_action->act_conf.mark_id = mark_spec->id; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (number > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (dest_num 
+ mark_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* Mark only is equal to mark + passthru. */ - if (dest_num == 0) { - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - filter->add_fltr.rule_cfg.action_set.count = ++number; - } - - return 0; -} - static bool iavf_fdir_refine_input_set(const uint64_t input_set, const uint64_t input_set_mask, @@ -1587,6 +1389,145 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, return 0; } +static int +iavf_fdir_action_check_qregion(struct iavf_adapter *ad, + const struct rte_flow_action_rss *rss, + struct rte_flow_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); + + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + } + + if (rss->queue_num > vf->max_rss_qregion) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size cannot be large than the supported max RSS queue region"); + } + + return 0; +} + +static int +iavf_fdir_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct iavf_adapter *ad = param->driver_ctx; + struct iavf_info *vf = 
IAVF_DEV_PRIVATE_TO_VF(ad); + struct iavf_fdir_conf *filter = &vf->fdir.conf; + uint32_t dest_num = 0, mark_num = 0; + size_t i, number = 0; + bool has_drop = false; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + has_drop = true; + + filter_action->type = VIRTCHNL_ACTION_DROP; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + dest_num++; + + act_q = act->conf; + + filter_action->type = VIRTCHNL_ACTION_QUEUE; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid queue index."); + } + filter_action->act_conf.queue.index = act_q->index; + + break; + } + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_Q_REGION; + + ret = iavf_fdir_action_check_qregion(ad, rss, error); + if (ret) + return ret; + + filter_action->act_conf.queue.index = rss->queue[0]; + filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec; + mark_num++; + + filter->mark_flag = 1; + mark_spec = act->conf; + + filter_action->type = VIRTCHNL_ACTION_MARK; + filter_action->act_conf.mark_id = mark_spec->id; + + break; + } + default: + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action."); + } + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + if (dest_num > 1 || mark_num > 1 || (has_drop && mark_num > 1)) { + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Unsupported action combination"); + } + + /* Mark only is equal to mark + passthru. */ + if (dest_num == 0) { + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + return 0; +} + static int iavf_fdir_parse(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, @@ -1597,6 +1538,20 @@ iavf_fdir_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fdir_parse_action_check, + .driver_ctx = ad + }; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct iavf_fdir_conf *filter = &vf->fdir.conf; struct iavf_pattern_match_item *item = NULL; @@ -1608,6 +1563,10 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) return -rte_errno; @@ -1617,10 +1576,6 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) goto error; - ret = iavf_fdir_parse_action(ad, actions, error, filter); - if (ret) - goto error; - if (meta) *meta = filter; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 19/25] net/iavf: use common action checks for fsub 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (17 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 20/25] net/iavf: use common action checks for flow query Anatoly Burakov ` (6 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action parsing infrastructure to check flow actions for the flow subscription filter. The existing implementation had a couple of issues that do not rise to the level of bugs, but are still questionable design choices. First, the DROP action was accepted by the action check (any single action was allowed as long as it wasn't RSS or QUEUE) but was later rejected by the action parser (which treats a missing port representor action as an error). This is fixed by removing DROP action support from the check stage. Second, the PORT_REPRESENTOR action incremented the action counter without writing anything into the action array, which, given that the action list is zero-initialized, meant that the default action (drop) was kept in the action list. Because the actual PF treats drop as a no-op, this was harmless, but it is equally harmless to add no action at all, so we remedy these unorthodox semantics by treating PORT_REPRESENTOR as a no-op and not adding anything to the action list. Finally, now that all filter parsing code paths use the common action check infrastructure, the NULL check for actions at the beginning of the parsing path can be removed, as each engine now handles it. 
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fsub.c | 248 +++++++++------------ drivers/net/intel/iavf/iavf_generic_flow.c | 7 - 2 files changed, 105 insertions(+), 150 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index 010c1d5a44..4131937cc7 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -540,89 +540,46 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], } static int -iavf_fsub_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action *actions, +iavf_fsub_parse_action(const struct ci_flow_actions *actions, uint32_t priority, struct rte_flow_error *error, struct iavf_fsub_conf *filter) { - const struct rte_flow_action *action; - const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_rss *act_qgrop; - struct virtchnl_filter_action *filter_action; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; - uint16_t i, num = 0, dest_num = 0, vf_num = 0; - uint16_t rule_port_id; + uint16_t num_actions = 0; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->sub_fltr.actions.actions[num_actions]; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { switch (action->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_ethdev = action->conf; - rule_port_id = ad->dev_data->port_id; - if (rule_port_id != act_ethdev->port_id) - goto error1; - - filter->sub_fltr.actions.count = ++num; - break; + /* nothing to be done, but skip the action */ + continue; case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - filter_action = 
&filter->sub_fltr.actions.actions[num]; - - act_q = action->conf; - if (act_q->index >= ad->dev_data->nb_rx_queues) - goto error2; - + { + const struct rte_flow_action_queue *act_q = action->conf; filter_action->type = VIRTCHNL_ACTION_QUEUE; filter_action->act_conf.queue.index = act_q->index; - filter->sub_fltr.actions.count = ++num; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error2; + { + const struct rte_flow_action_rss *act_qgrp = action->conf; filter_action->type = VIRTCHNL_ACTION_Q_REGION; - filter_action->act_conf.queue.index = - act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - - if (i == MAX_QGRP_NUM_TYPE) - goto error2; - - if ((act_qgrop->queue[0] + act_qgrop->queue_num) > - ad->dev_data->nb_rx_queues) - goto error3; - - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error4; - - filter_action->act_conf.queue.region = act_qgrop->queue_num; - filter->sub_fltr.actions.count = ++num; + filter_action->act_conf.queue.index = act_qgrp->queue[0]; + filter_action->act_conf.queue.region = act_qgrp->queue_num; break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type."); } + filter->sub_fltr.actions.count = ++num_actions; } /* 0 denotes lowest priority of recipe and highest priority @@ -630,91 +587,86 @@ iavf_fsub_parse_action(struct iavf_adapter *ad, */ filter->sub_fltr.priority = priority; - if (num > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - 
} - - if (vf_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action, vf action must be added"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - return 0; - -error1: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid port id"); - return -rte_errno; - -error2: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action type or queue number"); - return -rte_errno; - -error3: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid queue region indexes"); - return -rte_errno; - -error4: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Discontinuous queue region"); - return -rte_errno; } static int -iavf_fsub_check_action(const struct rte_flow_action *actions, +iavf_fsub_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, struct rte_flow_error *error) { - const struct rte_flow_action *action; - enum rte_flow_action_type action_type; - uint16_t actions_num = 0; - bool vf_valid = false; - bool queue_valid = false; + const struct iavf_adapter *ad = param->driver_ctx; + bool vf = false; + bool queue = false; + size_t i; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { + /* + * allowed action types: + * 1. PORT_REPRESENTOR only + * 2. 
PORT_REPRESENTOR + QUEUE/RSS + */ + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + switch (action->type) { case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_valid = true; - actions_num++; + { + const struct rte_flow_action_ethdev *act_ethdev = action->conf; + + if (act_ethdev->port_id != ad->dev_data->port_id) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_ethdev, + "Invalid port id"); + } + vf = true; break; + } case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrp = action->conf; + + /* must be between 2 and 128 and be a power of 2 */ + if (act_qgrp->queue_num < 2 || act_qgrp->queue_num > 128 || + !rte_is_power_of_2(act_qgrp->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid number of queues in RSS queue group"); + } + /* last queue must not exceed total number of queues */ + if (act_qgrp->queue[0] + act_qgrp->queue_num > ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid queue index in RSS queue group"); + } + + queue = true; + break; + } case RTE_FLOW_ACTION_TYPE_QUEUE: - queue_valid = true; - actions_num++; + { + const struct rte_flow_action_queue *act_q = action->conf; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue index"); + } + + queue = true; break; - case RTE_FLOW_ACTION_TYPE_DROP: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } } - - if (!((actions_num == 1 && !queue_valid) || - (actions_num == 2 && 
vf_valid && queue_valid))) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action number"); - return -rte_errno; + /* QUEUE/RSS must be accompanied by PORT_REPRESENTOR */ + if (queue != vf) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action combination"); } return 0; @@ -730,6 +682,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fsub_action_check, + .driver_ctx = ad, + }; struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; struct ci_flow_attr_check_param attr_param = { @@ -749,6 +713,10 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + goto error; + if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, @@ -772,14 +740,8 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; - /* check flow subscribe pattern action */ - ret = iavf_fsub_check_action(actions, error); - if (ret) - goto error; - /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, - error, filter); + ret = iavf_fsub_parse_action(&parsed_actions, attr->priority, error, filter); error: if (!ret && meta) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index b8f6414b16..022caf5fe2 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -2153,13 +2153,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if 
(!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, attr, pattern, actions, error); if (*engine) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 20/25] net/iavf: use common action checks for flow query 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (18 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 19/25] net/iavf: use common action checks for fsub Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 21/25] net/ice: use common flow attribute checks Anatoly Burakov ` (5 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow parsing infrastructure to validate query actions. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_generic_flow.c | 29 ++++++++++------------ 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 022caf5fe2..9fc19576cd 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" static struct iavf_engine_list engine_list = TAILQ_HEAD_INITIALIZER(engine_list); @@ -2332,7 +2333,14 @@ iavf_flow_query(struct rte_eth_dev *dev, void *data, struct rte_flow_error *error) { - int ret = -EINVAL; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1 + }; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_flow_query_count *count = data; @@ -2344,19 +2352,8 @@ iavf_flow_query(struct rte_eth_dev *dev, return -rte_errno; } - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case 
RTE_FLOW_ACTION_TYPE_COUNT: - ret = flow->engine->query_count(ad, flow, count, error); - break; - default: - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - } - } - return ret; + if (ci_flow_check_actions(actions, ¶m, &parsed_actions, error) < 0) + return -rte_errno; + + return flow->engine->query_count(ad, flow, count, error); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 21/25] net/ice: use common flow attribute checks 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (19 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 20/25] net/iavf: use common action checks for flow query Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 22/25] net/ice: use common action checks for hash Anatoly Burakov ` (4 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Replace custom attr checks with a call to common checks. Switch engine supports priority (0 or 1) but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 9 ++-- drivers/net/intel/ice/ice_fdir_filter.c | 8 ++- drivers/net/intel/ice/ice_generic_flow.c | 59 +++-------------------- drivers/net/intel/ice/ice_generic_flow.h | 2 +- drivers/net/intel/ice/ice_hash.c | 11 +++-- drivers/net/intel/ice/ice_switch_filter.c | 22 +++++++-- 6 files changed, 48 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 6754a40044..0421578b32 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -27,6 +27,8 @@ #include "ice_generic_flow.h" #include "base/ice_flow.h" +#include "../common/flow_check.h" + #define MAX_ACL_SLOTS_ID 2048 #define ICE_ACL_INSET_ETH_IPV4 ( \ @@ -970,7 +972,7 @@ ice_acl_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -980,8 +982,9 @@ ice_acl_parse(struct ice_adapter *ad, uint64_t 
input_set; int ret; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index 5b27f5a077..f5b832b863 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -15,6 +15,8 @@ #include "ice_rxtx.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_FDIR_IPV6_TC_OFFSET 20 #define ICE_IPV6_TC_MASK (0xFF << ICE_FDIR_IPV6_TC_OFFSET) @@ -2796,7 +2798,7 @@ ice_fdir_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority __rte_unused, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -2807,6 +2809,10 @@ ice_fdir_parse(struct ice_adapter *ad, bool raw = false; int ret; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c index 62f0c334a1..fe903a975c 100644 --- a/drivers/net/intel/ice/ice_generic_flow.c +++ b/drivers/net/intel/ice/ice_generic_flow.c @@ -16,6 +16,7 @@ #include <rte_malloc.h> #include <rte_tailq.h> +#include "../common/flow_check.h" #include "ice_ethdev.h" #include "ice_generic_flow.h" @@ -1959,7 +1960,7 @@ enum rte_flow_item_type pattern_eth_ipv6_udp_l2tpv2_ppp_ipv6_tcp[] = { typedef bool (*parse_engine_t)(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -2045,44 +2046,6 @@ 
ice_flow_uninit(struct ice_adapter *ad) } } -static int -ice_flow_valid_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * ice_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2360,7 +2323,7 @@ static bool ice_parse_engine_create(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2378,7 +2341,7 @@ ice_parse_engine_create(struct ice_adapter *ad, if (parser->parse_pattern_action(ad, parser->array, parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) return false; RTE_ASSERT(parser->engine->create != NULL); @@ -2390,7 +2353,7 @@ static bool ice_parse_engine_validate(struct ice_adapter *ad, struct rte_flow *flow __rte_unused, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2407,7 +2370,7 @@ ice_parse_engine_validate(struct 
ice_adapter *ad, return parser->parse_pattern_action(ad, parser->array, parser->array_len, - pattern, actions, priority, + pattern, actions, attr, NULL, error) >= 0; } @@ -2435,7 +2398,6 @@ ice_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t ice_parse_engine, struct rte_flow_error *error) { - int ret = ICE_ERR_NOT_SUPPORTED; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_flow_parser *parser; @@ -2460,15 +2422,10 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - ret = ice_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = NULL; /* always try hash engine first */ if (ice_parse_engine(ad, flow, &ice_hash_parser, - attr->priority, pattern, - actions, error)) { + attr, pattern, actions, error)) { *engine = ice_hash_parser.engine; return 0; } @@ -2489,7 +2446,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (ice_parse_engine(ad, flow, parser, attr->priority, + if (ice_parse_engine(ad, flow, parser, attr, pattern, actions, error)) { *engine = parser->engine; return 0; diff --git a/drivers/net/intel/ice/ice_generic_flow.h b/drivers/net/intel/ice/ice_generic_flow.h index 1b5514d5df..b9d6149d82 100644 --- a/drivers/net/intel/ice/ice_generic_flow.h +++ b/drivers/net/intel/ice/ice_generic_flow.h @@ -522,7 +522,7 @@ typedef int (*parse_pattern_action_t)(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index 77829e607b..bd42bc2a4a 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -26,6 +26,8 @@ #include "ice_ethdev.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_PHINT_NONE 0 #define ICE_PHINT_VLAN BIT_ULL(0) #define ICE_PHINT_PPPOE 
BIT_ULL(1) @@ -107,7 +109,7 @@ ice_hash_parse_pattern_action(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1185,7 +1187,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1194,8 +1196,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index b25e5eaad3..d8c0e7c59c 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -26,6 +26,7 @@ #include "ice_generic_flow.h" #include "ice_dcf_ethdev.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define MAX_INPUT_SET_BYTE 32 @@ -1768,7 +1769,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1784,6 +1785,21 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, enum ice_sw_tunnel_type tun_type = ICE_NON_TUN; struct ice_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + + /* Allow only two priority values - 0 or 1 */ 
+ if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL, + "Invalid priority for switch filter"); + return -rte_errno; + } for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { item_num++; @@ -1859,10 +1875,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, goto error; if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, priority, + ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, priority, error, + ret = ice_switch_parse_action(pf, actions, attr->priority, error, &rule_info); if (ret) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 22/25] net/ice: use common action checks for hash 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (20 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 21/25] net/ice: use common flow attribute checks Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 23/25] net/ice: use common action checks for FDIR Anatoly Burakov ` (3 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_hash.c | 178 +++++++++++++++++-------------- 1 file changed, 95 insertions(+), 83 deletions(-) diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index bd42bc2a4a..40bac92f8a 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -1090,94 +1090,92 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -ice_hash_parse_action(struct ice_pattern_match_item *pattern_match_item, - const struct rte_flow_action actions[], +ice_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss; + + rss = actions->actions[0]->conf; + + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_DEFAULT: + case RTE_ETH_HASH_FUNCTION_TOEPLITZ: + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + } + + if (rss->level) + return 
rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a nonzero RSS encapsulation level is not supported"); + + if (rss->key_len) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a nonzero RSS key_len is not supported"); + + if (rss->queue) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a non-NULL RSS queue is not supported"); + + return 0; +} + +static int +ice_hash_parse_rss_action(struct ice_pattern_match_item *pattern_match_item, + const struct rte_flow_action_rss *rss, uint64_t pattern_hint, struct ice_rss_meta *rss_meta, struct rte_flow_error *error) { struct ice_rss_hash_cfg *cfg = pattern_match_item->meta; - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; + bool symm = false; - /* Supported action is RSS. */ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; - - /* Check hash function and save it to rss_meta. 
*/ - if (pattern_match_item->pattern_list != - pattern_empty && rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Not supported flow"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; - return 0; - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; - if (pattern_hint == ICE_PHINT_RAW) - rss_meta->raw.symm = true; - else - cfg->symm = true; - } - - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); - - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); - - if (rss->queue) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); - - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == ICE_PHINT_RAW) - break; - - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. 
- */ - rss_type = rte_eth_rss_hf_refine(rss_type); - - if (ice_any_invalid_rss_type(rss->func, rss_type, - pattern_match_item->input_set_mask_o)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); - - rss_meta->cfg = *cfg; - ice_refine_hash_cfg(&rss_meta->cfg, - rss_type, pattern_hint); - break; - case RTE_FLOW_ACTION_TYPE_END: - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; + if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { + if (pattern_match_item->pattern_list != pattern_empty) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "XOR hash function is only supported for empty pattern"); } + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; + return 0; } + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; + symm = true; + } + + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == ICE_PHINT_RAW) { + rss_meta->raw.symm = symm; + return 0; + } + cfg->symm = symm; + + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); + + if (ice_any_invalid_rss_type(rss->func, rss_type, + pattern_match_item->input_set_mask_o)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss, "RSS type not supported"); + + rss_meta->cfg = *cfg; + ice_refine_hash_cfg(&rss_meta->cfg, + rss_type, pattern_hint); + return 0; } @@ -1191,15 +1189,29 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { - int ret = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct ice_pattern_match_item *pattern_match_item; struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; + int ret = 0; ret = ci_flow_check_attr(attr, NULL, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1231,9 +1243,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, } } - /* Check rss action. */ - ret = ice_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret = ice_hash_parse_rss_action(pattern_match_item, rss, phint, + rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v2 23/25] net/ice: use common action checks for FDIR 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (21 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 22/25] net/ice: use common action checks for hash Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 24/25] net/ice: use common action checks for switch Anatoly Burakov ` (2 subsequent siblings) 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_fdir_filter.c | 375 +++++++++++++----------- 1 file changed, 203 insertions(+), 172 deletions(-) diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index f5b832b863..21eb394932 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -1690,177 +1690,6 @@ static struct ice_flow_engine ice_fdir_engine = { .type = ICE_FLOW_ENGINE_FDIR, }; -static int -ice_fdir_parse_action_qregion(struct ice_pf *pf, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct ice_fdir_filter_conf *filter) -{ - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != 
rss->queue[i] + 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - (rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE))) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - filter->input.q_index = rss->queue[0]; - filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; - filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; - - return 0; -} - -static int -ice_fdir_parse_action(struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_fdir_filter_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - const struct rte_flow_action_count *act_count; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - uint32_t counter_num = 0; - int ret; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - 
ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - break; - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - ret = ice_fdir_parse_action_qregion(pf, - error, actions, filter); - if (ret) - return ret; - break; - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - filter->mark_flag = 1; - mark_spec = actions->conf; - filter->input.fltr_id = mark_spec->id; - filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; - break; - case RTE_FLOW_ACTION_TYPE_COUNT: - counter_num++; - - act_count = actions->conf; - filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; - rte_memcpy(&filter->act_count, act_count, - sizeof(filter->act_count)); - - break; - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (counter_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many count actions"); - return -rte_errno; - } - - if (dest_num + mark_num + counter_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* set default action to PASSTHRU mode, in "mark/count only" case. 
*/ - if (dest_num == 0) - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - - return 0; -} static int ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, @@ -2792,6 +2621,188 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_fdir_parse_action(struct ice_adapter *ad, + const struct ci_flow_actions *actions, + struct rte_flow_error *error) +{ + struct ice_pf *pf = &ad->pf; + struct ice_fdir_filter_conf *filter = &pf->fdir.conf; + bool dest_set = false; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_set = true; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_set = true; + + filter->input.q_index = rss->queue[0]; + filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; + filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; + + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec = act->conf; + filter->mark_flag = 1; + filter->input.fltr_id = mark_spec->id; + filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + const struct rte_flow_action_count *act_count = act->conf; + + filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; + rte_memcpy(&filter->act_count, act_count, + sizeof(filter->act_count)); + break; + } + default: + /* Should not happen */ + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + /* set default action to PASSTHRU mode, in "mark/count only" case. */ + if (!dest_set) { + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + } + + return 0; +} + +static int +ice_fdir_check_action_qregion(struct ice_pf *pf, + struct rte_flow_error *error, + const struct rte_flow_action_rss *rss) +{ + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + + return 0; +} + +static int +ice_fdir_check_action(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + uint32_t dest_num = 0; + uint32_t mark_num = 0; + uint32_t counter_num = 0; + size_t i; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_num++; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid queue for FDIR."); + } + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + 
dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + + dest_num++; + ret = ice_fdir_check_action_qregion(pf, error, rss); + if (ret) + return ret; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + mark_num++; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + counter_num++; + break; + } + default: + /* Should not happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + if (dest_num > 1 || mark_num > 1 || counter_num > 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Unsupported action combination"); + } + + return 0; +} + static int ice_fdir_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -2802,6 +2813,21 @@ ice_fdir_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 3, + .check = ice_fdir_check_action, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_fdir_filter_conf *filter = &pf->fdir.conf; struct ice_pattern_match_item *item = NULL; @@ -2814,6 +2840,11 @@ ice_fdir_parse(struct ice_adapter *ad, return ret; memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); @@ -2842,7 +2873,7 @@ ice_fdir_parse(struct ice_adapter *ad, goto error; } - ret = ice_fdir_parse_action(ad, actions, error, filter); + ret = ice_fdir_parse_action(ad, &parsed_actions, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related 
[flat|nested] 83+ messages in thread
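The queue-region constraints the FDIR check enforces (a region of at least two queues, a power-of-two size no larger than the maximum, and the last queue inside the device's Rx queue count) can be modeled in a standalone sketch. The names and the `MAX_QREGION_SIZE` constant below are illustrative stand-ins, not the driver's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_QREGION_SIZE 128 /* stand-in for ICE_FDIR_MAX_QREGION_SIZE */

/* Standalone model of the FDIR queue-region validation: the region must
 * hold at least two queues, its size must be a power of two no larger
 * than the maximum, and (since the region is contiguous) only the last,
 * highest queue index needs a bounds check against the Rx queue count.
 */
static bool
qregion_is_valid(const uint16_t *queues, uint32_t queue_num,
		 uint16_t nb_rx_queues)
{
	if (queue_num <= 1 || queue_num > MAX_QREGION_SIZE)
		return false;
	if ((queue_num & (queue_num - 1)) != 0)	/* power-of-two test */
		return false;
	if (queues[queue_num - 1] >= nb_rx_queues)
		return false;
	return true;
}
```

The contiguity of the region is assumed here, which is why checking only the last index suffices, mirroring the comment in the switch-filter patch.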
* [PATCH v2 24/25] net/ice: use common action checks for switch 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (22 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 23/25] net/ice: use common action checks for FDIR Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:52 ` [PATCH v2 25/25] net/ice: use common action checks for ACL Anatoly Burakov 2026-03-16 10:59 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Bruce Richardson 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for switch filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_switch_filter.c | 370 +++++++++++----------- 1 file changed, 184 insertions(+), 186 deletions(-) diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index d8c0e7c59c..9a46e3b413 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -35,6 +35,8 @@ #define ICE_IPV4_PROTO_NVGRE 0x002F #define ICE_SW_PRI_BASE 6 +#define ICE_SW_MAX_QUEUES 128 + #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) #define ICE_SW_INSET_MAC_VLAN ( \ @@ -1527,85 +1529,38 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], } static int -ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, - const struct rte_flow_action *actions, +ice_switch_parse_dcf_action(const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action *action; const struct rte_eth_dev *repr_dev; enum 
rte_flow_action_type action_type; - uint16_t rule_port_id, backer_port_id; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid_port_id; - - /* For traffic to original DCF port */ - rule_port_id = ad->parent.pf.dev_data->port_id; - - if (rule_port_id != act_ethdev->port_id) - goto invalid_port_id; - - rule_info->sw_act.vsi_handle = 0; - - break; - -invalid_port_id: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid; - - repr_dev = &rte_eth_devices[act_ethdev->port_id]; - - if (!repr_dev->data) - goto invalid; - - rule_port_id = ad->parent.pf.dev_data->port_id; - backer_port_id = repr_dev->data->backer_port_id; - - if (backer_port_id != rule_port_id) - goto invalid; - - rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; - break; - -invalid: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid ethdev_port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = ICE_DROP_PACKET; - break; - - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + rule_info->sw_act.vsi_handle = 0; + break; + + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + act_ethdev = action->conf; + repr_dev = 
&rte_eth_devices[act_ethdev->port_id]; + rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; + break; + + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = ICE_DROP_PACKET; + break; + + default: + /* Should never reach */ + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type"); + return -rte_errno; } rule_info->sw_act.src = rule_info->sw_act.vsi_handle; @@ -1621,73 +1576,38 @@ ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, static int ice_switch_parse_action(struct ice_pf *pf, - const struct rte_flow_action *actions, + const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { struct ice_vsi *vsi = pf->main_vsi; - struct rte_eth_dev_data *dev_data = pf->adapter->pf.dev_data; const struct rte_flow_action_queue *act_q; const struct rte_flow_action_rss *act_qgrop; - uint16_t base_queue, i; - const struct rte_flow_action *action; + uint16_t base_queue; enum rte_flow_action_type action_type; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; base_queue = pf->base_queue + vsi->base_queue; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error; - rule_info->sw_act.fltr_act = - ICE_FWD_TO_QGRP; - rule_info->sw_act.fwd_id.q_id = - base_queue + act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - if (i == MAX_QGRP_NUM_TYPE) - goto error; - if ((act_qgrop->queue[0] + - act_qgrop->queue_num) > - dev_data->nb_rx_queues) - goto error1; - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error2; - rule_info->sw_act.qgrp_size = - act_qgrop->queue_num; - break; - case 
RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = action->conf; - if (act_q->index >= dev_data->nb_rx_queues) - goto error; - rule_info->sw_act.fltr_act = - ICE_FWD_TO_Q; - rule_info->sw_act.fwd_id.q_id = - base_queue + act_q->index; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = - ICE_DROP_PACKET; - break; - - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - default: - goto error; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_RSS: + act_qgrop = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_QGRP; + rule_info->sw_act.fwd_id.q_id = base_queue + act_qgrop->queue[0]; + rule_info->sw_act.qgrp_size = act_qgrop->queue_num; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + act_q = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_Q; + rule_info->sw_act.fwd_id.q_id = base_queue + act_q->index; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = ICE_DROP_PACKET; + break; + default: + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type or queue number"); + return -rte_errno; } rule_info->sw_act.vsi_handle = vsi->idx; @@ -1699,65 +1619,120 @@ ice_switch_parse_action(struct ice_pf *pf, rule_info->priority = ICE_SW_PRI_BASE - priority; return 0; - -error: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type or queue number"); - return -rte_errno; - -error1: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue region indexes"); - return -rte_errno; - -error2: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Discontinuous queue region"); - return -rte_errno; } static int -ice_switch_check_action(const struct rte_flow_action *actions, - struct rte_flow_error *error) +ice_switch_dcf_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) { + struct 
ice_dcf_adapter *ad = param->driver_ctx; const struct rte_flow_action *action; enum rte_flow_action_type action_type; - uint16_t actions_num = 0; - - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - case RTE_FLOW_ACTION_TYPE_QUEUE: - case RTE_FLOW_ACTION_TYPE_DROP: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; + const struct rte_flow_action_ethdev *act_ethdev; + const struct rte_eth_dev *repr_dev; + + action = actions->actions[0]; + action_type = action->type; + + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + { + uint16_t expected_port_id, backer_port_id; + act_ethdev = action->conf; + + if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) + goto invalid_port_id; + + expected_port_id = ad->parent.pf.dev_data->port_id; + + if (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) { + if (expected_port_id != act_ethdev->port_id) + goto invalid_port_id; + } else { + repr_dev = &rte_eth_devices[act_ethdev->port_id]; + + if (!repr_dev->data) + goto invalid_port_id; + + backer_port_id = repr_dev->data->backer_port_id; + + if (backer_port_id != expected_port_id) + goto invalid_port_id; } + + break; +invalid_port_id: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid port ID"); + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } - if (actions_num != 1) { - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action number"); 
- return -rte_errno; + return 0; +} + +static int +ice_switch_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + struct rte_eth_dev_data *dev_data = pf->dev_data; + const struct rte_flow_action *action = actions->actions[0]; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrop; + act_qgrop = action->conf; + + /* Check bounds on number of queues */ + if (act_qgrop->queue_num < 2 || act_qgrop->queue_num > ICE_SW_MAX_QUEUES) + goto err_rss; + + /* must be power of 2 */ + if (!rte_is_power_of_2(act_qgrop->queue_num)) + goto err_rss; + + /* queues are monotonous and contiguous so check last queue */ + if ((act_qgrop->queue[0] + act_qgrop->queue_num) > dev_data->nb_rx_queues) + goto err_rss; + + break; +err_rss: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue region"); + } + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + act_q = action->conf; + if (act_q->index >= dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue"); + } + + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } return 0; @@ -1788,11 +1763,38 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, struct ci_flow_attr_check_param attr_param = { .allow_priority = true, }; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param dcf_param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + 
.check = ice_switch_dcf_action_check, + }; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_switch_action_check, + .driver_ctx = ad, + }; ret = ci_flow_check_attr(attr, &attr_param, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, (ad->hw.dcf_enabled) ? &dcf_param : ¶m, + &parsed_actions, error); + if (ret) + goto error; + /* Allow only two priority values - 0 or 1 */ if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, @@ -1870,16 +1872,12 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, memset(&rule_info, 0, sizeof(rule_info)); rule_info.tun_type = tun_type; - ret = ice_switch_check_action(actions, error); - if (ret) - goto error; - if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, - error, &rule_info); + ret = ice_switch_parse_dcf_action(parsed_actions.actions[0], + attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, attr->priority, error, - &rule_info); + ret = ice_switch_parse_action(pf, parsed_actions.actions[0], + attr->priority, error, &rule_info); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
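The `ci_flow_actions_check_param` pattern used throughout these conversions combines an allowed-type table, a cap on the number of non-void actions, and a driver callback for filter-specific checks. A self-contained sketch of the table-driven part follows; the enum and function are hypothetical stand-ins for the common-code types, assuming void actions are transparent and an empty action list is rejected:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of the common action-list walk: skip voids, reject
 * any type outside the allowed set, and enforce a per-filter cap on the
 * number of "real" actions. */
enum action_type { ACT_END, ACT_VOID, ACT_QUEUE, ACT_RSS, ACT_DROP, ACT_MARK };

static bool
check_actions(const enum action_type *acts, const enum action_type *allowed,
	      size_t max_actions, size_t *out_count)
{
	size_t count = 0;

	for (size_t i = 0; acts[i] != ACT_END; i++) {
		bool ok = false;

		if (acts[i] == ACT_VOID)
			continue;	/* voids are transparent */
		for (size_t j = 0; allowed[j] != ACT_END; j++)
			if (acts[i] == allowed[j])
				ok = true;
		if (!ok || ++count > max_actions)
			return false;
	}
	*out_count = count;
	return count > 0;	/* an empty action list is rejected too */
}
```

With this shape, a filter such as the switch filter simply supplies `max_actions = 1` and its three allowed types, and the per-type validation moves into the driver callback.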
* [PATCH v2 25/25] net/ice: use common action checks for ACL 2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov ` (23 preceding siblings ...) 2026-03-16 10:52 ` [PATCH v2 24/25] net/ice: use common action checks for switch Anatoly Burakov @ 2026-03-16 10:52 ` Anatoly Burakov 2026-03-16 10:59 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Bruce Richardson 25 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-03-16 10:52 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for ACL filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 143 +++++++++++++++---------- 1 file changed, 84 insertions(+), 59 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 0421578b32..90fd3c2c05 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -645,60 +645,6 @@ ice_acl_filter_free(struct rte_flow *flow) flow->rule = NULL; } -static int -ice_acl_parse_action(__rte_unused struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_acl_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - uint32_t dest_num = 0; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, 
EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num == 0 || dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - return 0; -} - static int ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, const struct rte_flow_item pattern[], @@ -966,6 +912,69 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_acl_parse_action(const struct ci_flow_actions *actions, + struct ice_acl_conf *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + + return 0; +} + +static int +ice_acl_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, 
act, + "Invalid queue for ACL."); + } + break; + } + default: + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + + return 0; +} + static int ice_acl_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -976,17 +985,33 @@ ice_acl_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_acl_parse_action_check, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_acl_conf *filter = &pf->acl.conf; struct ice_pattern_match_item *item = NULL; uint64_t input_set; int ret; - ret = ci_flow_check_attr(attr, NULL, error); - if (ret) - return ret; - memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); if (!item) @@ -1005,7 +1030,7 @@ ice_acl_parse(struct ice_adapter *ad, goto error; } - ret = ice_acl_parse_action(ad, actions, error, filter); + ret = ice_acl_parse_action(&parsed_actions, filter, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
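A point worth noting in the ACL conversion is the split it shares with the other filters: the `check` callback rejects bad configuration (for example, an out-of-range queue index) before any filter state is written, and the parse step then fills the filter unconditionally. A minimal sketch of that split, with hypothetical names and types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the check/parse split: validation runs first
 * against device limits and fails without touching the filter; parsing
 * only runs on input that has already passed validation. */
struct acl_filter {
	uint16_t q_index;
	bool drop;
};

static bool
acl_check_queue(uint16_t q_index, uint16_t nb_rx_queues)
{
	/* reject out-of-range queues up front, before touching the filter */
	return q_index < nb_rx_queues;
}

static void
acl_parse_queue(struct acl_filter *f, uint16_t q_index)
{
	/* by the time we get here, q_index is known to be valid */
	f->q_index = q_index;
	f->drop = false;
}
```

Keeping validation side-effect free is what lets the common infrastructure call the check early in `ice_acl_parse()` and leave the shared `filter` untouched on error.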
* Re: [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's
  2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov
                     ` (24 preceding siblings ...)
  2026-03-16 10:52 ` [PATCH v2 25/25] net/ice: use common action checks for ACL Anatoly Burakov
@ 2026-03-16 10:59 ` Bruce Richardson
  25 siblings, 0 replies; 83+ messages in thread
From: Bruce Richardson @ 2026-03-16 10:59 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev

On Mon, Mar 16, 2026 at 10:52:25AM +0000, Anatoly Burakov wrote:
> This patchset introduces common flow attr/action checking infrastructure to
> some Intel PMD's (IXGBE, I40E, IAVF, and ICE). The aim is to reduce code
> duplication, simplify implementation of new parsers/verification of existing
> ones, and make action/attr handling more consistent across drivers.
>
> v2:
> - Rebase on latest main
> - Now depends on series 37585 [1]
>
> [1] https://patches.dpdk.org/project/dpdk/list/?series=37585
>
Since the patchset is deferred, the link shows nothing unless we override the
state value. The link below should work better:

https://patches.dpdk.org/project/dpdk/list/?series=37585&state=*

^ permalink raw reply	[flat|nested] 83+ messages in thread
* [PATCH v3 00/29] Add common flow attr/action parsing infrastructure to Intel PMD's
  2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov
                     ` (25 preceding siblings ...)
  2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov
@ 2026-04-10 13:12 ` Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 01/29] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
                     ` (28 more replies)
  26 siblings, 29 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw)
To: dev

This patchset introduces common flow attr/action checking infrastructure to
some Intel PMD's (IXGBE, I40E, IAVF, and ICE). The aim is to reduce code
duplication, simplify implementation of new parsers/verification of existing
ones, and make action/attr handling more consistent across drivers.

v3:
- Rebase on latest next-net-intel
- Added 4 new commits that have to do with not using `rte_eth_dev` in rte_flow
- Converted the remaining commits to use adapter structures everywhere
- Minor fixes in how return values are handled
- Fixed incorrect check in i40e RSS validation

v2:
- Rebase on latest main
- Now depends on series 37585 [1]

[1] https://patches.dpdk.org/project/dpdk/list/?series=37585

Anatoly Burakov (24):
  net/ixgbe: fix shared PF pointer in representor
  net/ixgbe: store max VFs in adapter
  net/ixgbe: reduce FDIR conf macro usage
  net/ixgbe: use adapter in flow-related calls
  net/intel/common: add common flow action parsing
  net/intel/common: add common flow attr validation
  net/ixgbe: use common checks in ethertype filter
  net/ixgbe: use common checks in syn filter
  net/ixgbe: use common checks in L2 tunnel filter
  net/ixgbe: use common checks in ntuple filter
  net/ixgbe: use common checks in security filter
  net/ixgbe: use common checks in FDIR filters
  net/ixgbe: use common checks in RSS filter
  net/i40e: use common flow attribute checks
  net/i40e: refactor RSS flow parameter checks
  net/i40e: use common action checks for ethertype
  net/i40e: use common action checks for FDIR
  net/i40e: use common action checks for tunnel
  net/iavf: use common flow attribute checks
  net/iavf: use common action checks for IPsec
  net/iavf: use common action checks for hash
  net/iavf: use common action checks for FDIR
  net/iavf: use common action checks for fsub
  net/iavf: use common action checks for flow query

Vladimir Medvedkin (5):
  net/ice: use common flow attribute checks
  net/ice: use common action checks for hash
  net/ice: use common action checks for FDIR
  net/ice: use common action checks for switch
  net/ice: use common action checks for ACL

 drivers/net/intel/common/flow_check.h      |  307 +++++
 drivers/net/intel/i40e/i40e_ethdev.h       |    1 -
 drivers/net/intel/i40e/i40e_flow.c         |  433 +++---
 drivers/net/intel/i40e/i40e_hash.c         |  437 +++---
 drivers/net/intel/i40e/i40e_hash.h         |    2 +-
 drivers/net/intel/iavf/iavf_fdir.c         |  367 +++--
 drivers/net/intel/iavf/iavf_fsub.c         |  266 ++--
 drivers/net/intel/iavf/iavf_generic_flow.c |  105 +-
 drivers/net/intel/iavf/iavf_generic_flow.h |    2 +-
 drivers/net/intel/iavf/iavf_hash.c         |  153 +--
 drivers/net/intel/iavf/iavf_ipsec_crypto.c |   43 +-
 drivers/net/intel/ice/ice_acl_filter.c     |  146 +-
 drivers/net/intel/ice/ice_fdir_filter.c    |  383 +++---
 drivers/net/intel/ice/ice_generic_flow.c   |   59 +-
 drivers/net/intel/ice/ice_generic_flow.h   |    2 +-
 drivers/net/intel/ice/ice_hash.c           |  189 +--
 drivers/net/intel/ice/ice_switch_filter.c  |  388 +++---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c     |  101 +-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h     |   29 +-
 drivers/net/intel/ixgbe/ixgbe_fdir.c       |  113 +-
 drivers/net/intel/ixgbe/ixgbe_flow.c       | 1187 ++++++-----
 drivers/net/intel/ixgbe/ixgbe_rxtx.c       |   10 +-
 .../net/intel/ixgbe/ixgbe_vf_representor.c |   63 +-
 23 files changed, 2389 insertions(+), 2397 deletions(-)
 create mode 100644 drivers/net/intel/common/flow_check.h

-- 
2.47.3

^ permalink raw reply	[flat|nested] 83+ messages in thread
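The first v3 patch (net/ixgbe: fix shared PF pointer in representor) follows a general multi-process rule: pointers are process-local, so a structure in shared memory should store a stable identifier, and each process should resolve it through its own device table. A simplified sketch of that pattern, with stand-in types for `rte_eth_devices[]`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS 8 /* stand-in for the ethdev port table size */

/* Per-process device table; in DPDK this is rte_eth_devices[]. */
struct eth_dev {
	bool valid;
};
static struct eth_dev eth_devices[MAX_PORTS];

/* Representor private data lives in shared memory, so it stores a port
 * id rather than a process-local struct eth_dev pointer. */
struct vf_representor {
	uint16_t pf_port_id;
};

static struct eth_dev *
representor_pf_get(const struct vf_representor *r)
{
	/* resolve the id in this process; the caller maps NULL to -ENODEV */
	if (r->pf_port_id >= MAX_PORTS || !eth_devices[r->pf_port_id].valid)
		return NULL;
	return &eth_devices[r->pf_port_id];
}
```

This is a sketch of the idea only; the actual patch additionally validates the port with `rte_eth_dev_is_valid_port()` and returns `-ENODEV` from each representor op when the PF cannot be resolved.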
* [PATCH v3 01/29] net/ixgbe: fix shared PF pointer in representor
  2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov
@ 2026-04-10 13:12 ` Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 02/29] net/ixgbe: store max VFs in adapter Anatoly Burakov
  ` (27 subsequent siblings)
  28 siblings, 0 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw)
To: dev, Vladimir Medvedkin, Declan Doherty, Ferruh Yigit, Mohammad Abdul Awal, Remy Horton

Currently, ixgbe representor private data stores a PF ethdev pointer. That
pointer is process-local, but it is stored in shared memory, so a secondary
process can read an invalid pointer value.

Fix this by storing the PF port id in representor private data and resolving
the PF ethdev from rte_eth_devices[] in each process. Return -ENODEV when the
PF port is not valid.

Fixes: cf80ba6e2038 ("net/ixgbe: add support for representor ports")
Cc: declan.doherty@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c        |  2 +-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h        |  2 +-
 .../net/intel/ixgbe/ixgbe_vf_representor.c    | 63 ++++++++++++++-----
 3 files changed, 50 insertions(+), 17 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 57d929cf2c..6f758b802d 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1818,7 +1818,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		representor.vf_id = eth_da.representor_ports[i];
 		representor.switch_domain_id = vfinfo->switch_domain_id;
-		representor.pf_ethdev = pf_ethdev;
+		representor.pf_port_id = pf_ethdev->data->port_id;
 
 		/* representor port net_bdf_port */
 		snprintf(name, sizeof(name), "net_%s_representor_%d",
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 32d7b98ed1..a014dbff10 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -504,7 +504,7 @@ struct ixgbe_adapter {
 struct ixgbe_vf_representor {
 	uint16_t vf_id;
 	uint16_t switch_domain_id;
-	struct rte_eth_dev *pf_ethdev;
+	uint16_t pf_port_id;
 };
 
 int ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
diff --git a/drivers/net/intel/ixgbe/ixgbe_vf_representor.c b/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
index 901d80e406..52b43530c0 100644
--- a/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/intel/ixgbe/ixgbe_vf_representor.c
@@ -13,14 +13,27 @@
 #include "ixgbe_rxtx.h"
 #include "rte_pmd_ixgbe.h"
 
+static struct rte_eth_dev *
+ixgbe_vf_representor_pf_get(const struct ixgbe_vf_representor *representor)
+{
+	if (!rte_eth_dev_is_valid_port(representor->pf_port_id))
+		return NULL;
+
+	return &rte_eth_devices[representor->pf_port_id];
+}
+
 static int
 ixgbe_vf_representor_link_update(struct rte_eth_dev *ethdev,
 	int wait_to_complete)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	return ixgbe_dev_link_update_share(representor->pf_ethdev,
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
+	return ixgbe_dev_link_update_share(pf_ethdev,
 		wait_to_complete, 0);
 }
 
@@ -29,9 +42,13 @@ ixgbe_vf_representor_mac_addr_set(struct rte_eth_dev *ethdev,
 	struct rte_ether_addr *mac_addr)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
+
+	if (pf_ethdev == NULL)
+		return -ENODEV;
 
 	return rte_pmd_ixgbe_set_vf_mac_addr(
-		representor->pf_ethdev->data->port_id,
+		pf_ethdev->data->port_id,
 		representor->vf_id, mac_addr);
 }
 
@@ -40,11 +57,14 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	struct rte_eth_dev_info *dev_info)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(
-		representor->pf_ethdev->data->dev_private);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
 
-	dev_info->device = representor->pf_ethdev->device;
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(pf_ethdev->data->dev_private);
+
+	dev_info->device = pf_ethdev->device;
 
 	dev_info->min_rx_bufsize = 1024;
 	/**< Minimum size of RX buffer. */
@@ -70,11 +90,11 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
-		representor->pf_ethdev->data->dev_link.link_speed;
+		pf_ethdev->data->dev_link.link_speed;
 	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
-		representor->pf_ethdev->device->name;
+		pf_ethdev->device->name;
 	dev_info->switch_info.domain_id = representor->switch_domain_id;
 	dev_info->switch_info.port_id = representor->vf_id;
 
@@ -123,10 +143,14 @@ ixgbe_vf_representor_vlan_filter_set(struct rte_eth_dev *ethdev,
 	uint16_t vlan_id, int on)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 	uint64_t vf_mask = 1ULL << representor->vf_id;
 
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
 	return rte_pmd_ixgbe_set_vf_vlan_filter(
-		representor->pf_ethdev->data->port_id, vlan_id, vf_mask, on);
+		pf_ethdev->data->port_id, vlan_id, vf_mask, on);
 }
 
 static void
@@ -134,8 +158,12 @@ ixgbe_vf_representor_vlan_strip_queue_set(struct rte_eth_dev *ethdev,
 	__rte_unused uint16_t rx_queue_id, int on)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev = ixgbe_vf_representor_pf_get(representor);
 
-	rte_pmd_ixgbe_set_vf_vlan_stripq(representor->pf_ethdev->data->port_id,
+	if (pf_ethdev == NULL)
+		return;
+
+	rte_pmd_ixgbe_set_vf_vlan_stripq(pf_ethdev->data->port_id,
 		representor->vf_id, on);
 }
 
@@ -175,6 +203,7 @@ int
 ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 {
 	struct ixgbe_vf_representor *representor = ethdev->data->dev_private;
+	struct rte_eth_dev *pf_ethdev;
 	struct ixgbe_vf_info *vf_data;
 	struct rte_pci_device *pci_dev;
 
@@ -187,17 +216,21 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 		((struct ixgbe_vf_representor *)init_params)->vf_id;
 	representor->switch_domain_id =
 		((struct ixgbe_vf_representor *)init_params)->switch_domain_id;
-	representor->pf_ethdev =
-		((struct ixgbe_vf_representor *)init_params)->pf_ethdev;
+	representor->pf_port_id =
+		((struct ixgbe_vf_representor *)init_params)->pf_port_id;
 
-	pci_dev = RTE_ETH_DEV_TO_PCI(representor->pf_ethdev);
+	pf_ethdev = ixgbe_vf_representor_pf_get(representor);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(pf_ethdev);
 
 	if (representor->vf_id >= pci_dev->max_vfs)
 		return -ENODEV;
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
-	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
+	ethdev->data->backer_port_id = pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
@@ -214,13 +247,13 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	/* Reference VF mac address from PF data structure */
 	vf_data = *IXGBE_DEV_PRIVATE_TO_P_VFDATA(
-		representor->pf_ethdev->data->dev_private);
+		pf_ethdev->data->dev_private);
 
 	ethdev->data->mac_addrs = (struct rte_ether_addr *)
 		vf_data[representor->vf_id].vf_mac_addresses;
 
 	/* Link state. Inherited from PF */
-	link = &representor->pf_ethdev->data->dev_link;
+	link = &pf_ethdev->data->dev_link;
 	ethdev->data->dev_link.link_speed = link->link_speed;
 	ethdev->data->dev_link.link_duplex = link->link_duplex;
 
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 83+ messages in thread
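The fix in this patch replaces a process-local pointer held in shared memory with a stable port id that each process resolves against its own local device table. A minimal standalone sketch of that pattern follows; the types and names here (`eth_dev`, `vf_representor`, `representor_pf_get`, `MAX_PORTS`) are illustrative mocks, not the real DPDK structures:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PORTS 32

/* Mock stand-in for rte_eth_dev: lives in per-process memory. */
struct eth_dev {
	uint16_t port_id;
	int valid;
};

/* Per-process device table, analogous to rte_eth_devices[]; every
 * process has its own copy at its own addresses. */
static struct eth_dev eth_devices[MAX_PORTS];

/* Representor private data lives in shared memory: store only the port
 * id, never a pointer, so any process can resolve its own ethdev. */
struct vf_representor {
	uint16_t vf_id;
	uint16_t pf_port_id;	/* replaces: struct eth_dev *pf_ethdev */
};

static struct eth_dev *
representor_pf_get(const struct vf_representor *rep)
{
	/* Mirrors the rte_eth_dev_is_valid_port() check; a real caller
	 * would turn NULL into -ENODEV. */
	if (rep->pf_port_id >= MAX_PORTS || !eth_devices[rep->pf_port_id].valid)
		return NULL;
	return &eth_devices[rep->pf_port_id];
}
```

Each process pays one bounds-checked table lookup per call, and a representor probed in the primary process can no longer hand a dangling primary-process address to a secondary.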
* [PATCH v3 02/29] net/ixgbe: store max VFs in adapter
  2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 01/29] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
@ 2026-04-10 13:12 ` Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 03/29] net/ixgbe: reduce FDIR conf macro usage Anatoly Burakov
  ` (26 subsequent siblings)
  28 siblings, 0 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw)
To: dev, Vladimir Medvedkin

Currently, rte_flow-related checks use `rte_eth_dev` to check for max VFs.
With the coming rework of flows, the aim is to make the code as
multiprocess-agnostic as possible, and the `rte_eth_dev` pointer is
process-local. To support this in VF-related checks, cache max_vfs in the
ixgbe adapter during device init and read it in the rte_flow check paths,
avoiding a direct dependency on `rte_eth_dev`.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 3 ++-
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 4 ++++
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 8 ++++----
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 6f758b802d..4cfaf47a38 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -1084,7 +1084,7 @@ ixgbe_parse_devargs(struct ixgbe_adapter *adapter,
 static int
 eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
-	struct ixgbe_adapter *ad = eth_dev->data->dev_private;
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 	struct ixgbe_hw *hw =
@@ -1151,6 +1151,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	hw->vendor_id = pci_dev->id.vendor_id;
 	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
 	hw->allow_unsupported_sfp = 1;
+	ad->max_vfs = pci_dev->max_vfs;
 
 	/* Initialize the shared code (base driver) */
 #ifdef RTE_LIBRTE_IXGBE_BYPASS
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index a014dbff10..04e5e014d9 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -491,6 +491,7 @@ struct ixgbe_adapter {
 
 	/* Used for limiting SDP3 TX_DISABLE checks */
 	uint8_t sdp3_no_tx_disable;
+	uint16_t max_vfs;
 
 	/* Used for VF link sync with PF's physical and logical (by checking
 	 * mailbox status) link status.
@@ -515,6 +516,9 @@ uint16_t ixgbe_vf_representor_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts
 #define IXGBE_DEV_FDIR_CONF(dev) \
 	(&((struct ixgbe_adapter *)(dev)->data->dev_private)->fdir_conf)
 
+#define IXGBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
+	((struct ixgbe_adapter *)adapter)
+
 #define IXGBE_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct ixgbe_adapter *)adapter)->hw)
 
diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c
index 01cd4f9bde..c40ef02f23 100644
--- a/drivers/net/intel/ixgbe/ixgbe_flow.c
+++ b/drivers/net/intel/ixgbe/ixgbe_flow.c
@@ -1272,7 +1272,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 	const struct rte_flow_item_e_tag *e_tag_mask;
 	const struct rte_flow_action *act;
 	const struct rte_flow_action_vf *act_vf;
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 
 	if (!pattern) {
 		rte_flow_error_set(error, EINVAL,
@@ -1404,7 +1404,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		act_vf = (const struct rte_flow_action_vf *)act->conf;
 		filter->pool = act_vf->id;
 	} else {
-		filter->pool = pci_dev->max_vfs;
+		filter->pool = ad->max_vfs;
 	}
 
 	/* check if the next not void item is END */
@@ -1430,7 +1430,7 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev,
 {
 	int ret = 0;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t vf_num;
 
 	ret = cons_parse_l2_tn_filter(dev, attr, pattern,
@@ -1447,7 +1447,7 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	vf_num = pci_dev->max_vfs;
+	vf_num = ad->max_vfs;
 	if (l2_tn_filter->pool > vf_num)
 		return -rte_errno;
 
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 83+ messages in thread
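The technique in this patch — copying a process-local device attribute into driver-private state once at init, so that later validation paths never dereference process-local structures — can be sketched as follows. All names here (`mock_pci_device`, `mock_adapter`, and the two helpers) are illustrative stand-ins, not the real `rte_pci_device`/`ixgbe_adapter` types:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for rte_pci_device (process-local). */
struct mock_pci_device {
	uint16_t max_vfs;
};

/* Illustrative stand-in for ixgbe_adapter (driver-private state). */
struct mock_adapter {
	uint16_t max_vfs;	/* cached copy, set once at device init */
};

/* Init path: runs once while the PCI device handle is in scope. */
static void
mock_adapter_init(struct mock_adapter *ad, const struct mock_pci_device *pci)
{
	ad->max_vfs = pci->max_vfs;
}

/* Flow-parse path: validates a pool/VF id against the cached limit
 * only, mirroring the `l2_tn_filter->pool > vf_num` check above. */
static int
mock_pool_valid(const struct mock_adapter *ad, uint16_t pool)
{
	return pool <= ad->max_vfs;
}
```

The cached value never changes after probe (max VFs is a property of the PCI device), so there is no staleness concern with caching it this way.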
* [PATCH v3 03/29] net/ixgbe: reduce FDIR conf macro usage
  2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 01/29] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 02/29] net/ixgbe: store max VFs in adapter Anatoly Burakov
@ 2026-04-10 13:12 ` Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 04/29] net/ixgbe: use adapter in flow-related calls Anatoly Burakov
  ` (25 subsequent siblings)
  28 siblings, 0 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw)
To: dev, Vladimir Medvedkin

Currently, there are quite a few places where the FDIR_CONF macro is used
repeatedly within the same function. Change these instances to get the fdir
conf pointer only once, and use the pointer instead.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c |  3 +-
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 39 ++++++++++++++++----------
 2 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 4cfaf47a38..076cc632bf 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -2613,6 +2613,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw =
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -2717,7 +2718,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	/* Configure DCB hw */
 	ixgbe_configure_dcb(dev);
 
-	if (IXGBE_DEV_FDIR_CONF(dev)->mode != RTE_FDIR_MODE_NONE) {
+	if (fdir_conf->mode != RTE_FDIR_MODE_NONE) {
 		err = ixgbe_fdir_configure(dev);
 		if (err)
 			goto error;
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index 0bdfbd411a..e07c2adbda 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -253,6 +253,7 @@ static int
 fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	/*
@@ -316,7 +317,7 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
 	*reg = ~(info->mask.dst_ipv4_mask);
 
-	if (IXGBE_DEV_FDIR_CONF(dev)->mode == RTE_FDIR_MODE_SIGNATURE) {
+	if (fdir_conf->mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
@@ -337,6 +338,7 @@ static int
 fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	/* mask VM pool and DIPv6 since there are currently not supported
@@ -345,7 +347,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
 			 IXGBE_FDIRM_FLEX;
 	uint32_t fdiripv6m;
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode mode = fdir_conf->mode;
 	uint16_t mac_mask;
 
 	PMD_INIT_FUNC_TRACE();
@@ -468,7 +470,8 @@ static int
 ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
 			    const struct rte_eth_fdir_masks *input_mask)
 {
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
@@ -484,7 +487,8 @@ ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
 int
 ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 {
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
+	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
@@ -635,10 +639,11 @@ int
 ixgbe_fdir_configure(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	int err;
 	uint32_t fdirctrl, pbsize;
 	int i;
-	enum rte_fdir_mode mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode mode = fdir_conf->mode;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -659,7 +664,7 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 	    mode != RTE_FDIR_MODE_PERFECT)
 		return -ENOSYS;
 
-	err = configure_fdir_flags(IXGBE_DEV_FDIR_CONF(dev), &fdirctrl);
+	err = configure_fdir_flags(fdir_conf, &fdirctrl);
 	if (err)
 		return err;
 
@@ -681,12 +686,12 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
 	for (i = 1; i < 8; i++)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
 
-	err = fdir_set_input_mask(dev, &IXGBE_DEV_FDIR_CONF(dev)->mask);
+	err = fdir_set_input_mask(dev, &fdir_conf->mask);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD mask");
 		return err;
 	}
-	err = ixgbe_set_fdir_flex_conf(dev, &IXGBE_DEV_FDIR_CONF(dev)->flex_conf,
+	err = ixgbe_set_fdir_flex_conf(dev, &fdir_conf->flex_conf,
 				       &fdirctrl);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, " Error on setting FD flexible arguments.");
@@ -1115,6 +1120,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
 	uint8_t queue;
@@ -1122,7 +1128,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 	int err;
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
@@ -1167,12 +1173,12 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			return -ENOTSUP;
 		}
 		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
-							  IXGBE_DEV_FDIR_CONF(dev)->pballoc);
+							  fdir_conf->pballoc);
 		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
 		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
-						      IXGBE_DEV_FDIR_CONF(dev)->pballoc);
+						      fdir_conf->pballoc);
 
 	if (del) {
 		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
@@ -1190,7 +1196,7 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
 	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
-			queue = IXGBE_DEV_FDIR_CONF(dev)->drop_queue;
+			queue = fdir_conf->drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
 		} else {
 			PMD_DRV_LOG(ERR, "Drop option is not supported in"
@@ -1281,6 +1287,7 @@ void
 ixgbe_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir_info)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	uint32_t fdirctrl, max_num;
@@ -1290,7 +1297,7 @@ ixgbe_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir_info
 	offset = ((fdirctrl & IXGBE_FDIRCTRL_FLEX_MASK) >>
 			IXGBE_FDIRCTRL_FLEX_SHIFT) * sizeof(uint16_t);
 
-	fdir_info->mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	fdir_info->mode = fdir_conf->mode;
 	max_num = (1 << (FDIRENTRIES_NUM_SHIFT +
 			(fdirctrl & FDIRCTRL_PBALLOC_MASK)));
 	if (fdir_info->mode >= RTE_FDIR_MODE_PERFECT &&
@@ -1340,10 +1347,11 @@ void
 ixgbe_fdir_stats_get(struct rte_eth_dev *dev, struct rte_eth_fdir_stats *fdir_stats)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	uint32_t reg, max_num;
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 
 	/* Get the information from registers */
 	reg = IXGBE_READ_REG(hw, IXGBE_FDIRFREE);
@@ -1396,11 +1404,12 @@ void
 ixgbe_fdir_filter_restore(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_fdir_filter *node;
 	bool is_perfect = FALSE;
-	enum rte_fdir_mode fdir_mode = IXGBE_DEV_FDIR_CONF(dev)->mode;
+	enum rte_fdir_mode fdir_mode = fdir_conf->mode;
 
 	if (fdir_mode >= RTE_FDIR_MODE_PERFECT &&
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 83+ messages in thread
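The refactor in this patch is the standard hoist of a repeated accessor into a local: fetch the pointer once at the top of the function, then use it everywhere. A generic sketch of the resulting shape, with a mock macro and struct rather than the real ixgbe definitions:

```c
#include <assert.h>

/* Illustrative mocks of the config struct and its containing private data. */
struct mock_fdir_conf {
	int mode;
	int drop_queue;
};

struct mock_private {
	struct mock_fdir_conf fdir_conf;
};

/* Stand-in for IXGBE_DEV_FDIR_CONF(dev): each use is a pointer lookup. */
#define MOCK_FDIR_CONF(priv) (&(priv)->fdir_conf)

static int
mock_configure(struct mock_private *priv)
{
	/* After the refactor: one macro expansion, reused for every access,
	 * instead of repeating MOCK_FDIR_CONF(priv)->... at each site. */
	struct mock_fdir_conf *fdir_conf = MOCK_FDIR_CONF(priv);

	if (fdir_conf->mode == 0)	/* analogous to RTE_FDIR_MODE_NONE */
		return -1;
	return fdir_conf->drop_queue;
}
```

Beyond readability, collapsing every access to one local also makes the next patch in the series mechanical: only the single initialization line has to change when the lookup switches from `dev` to `adapter`.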
* [PATCH v3 04/29] net/ixgbe: use adapter in flow-related calls
  2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov
                     ` (2 preceding siblings ...)
  2026-04-10 13:12 ` [PATCH v3 03/29] net/ixgbe: reduce FDIR conf macro usage Anatoly Burakov
@ 2026-04-10 13:12 ` Anatoly Burakov
  2026-04-10 13:12 ` [PATCH v3 05/29] net/intel/common: add common flow action parsing Anatoly Burakov
  ` (24 subsequent siblings)
  28 siblings, 0 replies; 83+ messages in thread
From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw)
To: dev, Vladimir Medvedkin

Currently, a lot of rte_flow-related code paths depend on the `dev` pointer.
This has been okay up until now, because all the infrastructure surrounding
rte_flow has been ad-hoc and did not have any persistent driver
identification mechanism that works across multiple drivers, so every API
call was tied to an immediate rte_eth_dev API invocation. However, with the
coming shared infrastructure, we can no longer rely on things that are
process-local (such as the `dev` pointer), and because most calls can be
implemented using `adapter` anyway, switch the flow-related internal calls
to use `adapter` instead of `dev`.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 drivers/net/intel/ixgbe/ixgbe_ethdev.c | 93 ++++++++++++++------------
 drivers/net/intel/ixgbe/ixgbe_ethdev.h | 23 ++++---
 drivers/net/intel/ixgbe/ixgbe_fdir.c   | 90 +++++++++++++------------
 drivers/net/intel/ixgbe/ixgbe_flow.c   | 43 +++++++-----
 drivers/net/intel/ixgbe/ixgbe_rxtx.c   | 10 +--
 5 files changed, 143 insertions(+), 116 deletions(-)

diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.c b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
index 076cc632bf..30718a487f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.c
@@ -302,9 +302,9 @@ static int ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
 static void ixgbevf_remove_mac_addr(struct rte_eth_dev *dev, uint32_t index);
 static int ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 					struct rte_ether_addr *mac_addr);
-static int ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
+static int ixgbe_add_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter);
-static void ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
+static void ixgbe_remove_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter);
 static int ixgbe_dev_flow_ops_get(struct rte_eth_dev *dev,
 				  const struct rte_flow_ops **ops);
@@ -2611,8 +2611,9 @@ ixgbe_flow_ctrl_enable(struct rte_eth_dev *dev, struct ixgbe_hw *hw)
 static int
 ixgbe_dev_start(struct rte_eth_dev *dev)
 {
-	struct ixgbe_hw *hw =
-		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev);
 	struct ixgbe_vf_info *vfinfo =
 		*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
@@ -2719,7 +2720,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	ixgbe_configure_dcb(dev);
 
 	if (fdir_conf->mode != RTE_FDIR_MODE_NONE) {
-		err = ixgbe_fdir_configure(dev);
+		err = ixgbe_fdir_configure(adapter);
 		if (err)
 			goto error;
 	}
@@ -6445,13 +6446,13 @@ ixgbevf_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 int
-ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+ixgbe_syn_filter_set(struct ixgbe_adapter *adapter,
 			struct rte_eth_syn_filter *filter,
 			bool add)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint32_t syn_info;
 	uint32_t synqf;
 
@@ -6499,10 +6500,10 @@ convert_protocol_type(uint8_t protocol_value)
 
 /* inject a 5-tuple filter to HW */
 static inline void
-ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_inject_5tuple_filter(struct ixgbe_adapter *adapter,
 			   struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	int i;
 	uint32_t ftqf, sdpqf;
 	uint32_t l34timir = 0;
@@ -6557,11 +6558,11 @@ ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev,
  *    - On failure, a negative value.
  */
 static int
-ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_add_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter)
 {
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	int i, idx, shift;
 
 	/*
@@ -6585,7 +6586,7 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	ixgbe_inject_5tuple_filter(dev, filter);
+	ixgbe_inject_5tuple_filter(adapter, filter);
 
 	return 0;
 }
@@ -6598,12 +6599,12 @@ ixgbe_add_5tuple_filter(struct rte_eth_dev *dev,
  * filter: the pointer of the filter will be removed.
  */
 static void
-ixgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
+ixgbe_remove_5tuple_filter(struct ixgbe_adapter *adapter,
 			struct ixgbe_5tuple_filter *filter)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint16_t index = filter->index;
 
 	filter_info->fivetuple_mask[index / (sizeof(uint32_t) * NBBY)] &=
@@ -6771,12 +6772,12 @@ ntuple_filter_to_5tuple(struct rte_eth_ntuple_filter *filter,
  *    - On failure, a negative value.
  */
 int
-ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+ixgbe_add_del_ntuple_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ntuple_filter *ntuple_filter,
 			bool add)
 {
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	struct ixgbe_5tuple_filter_info filter_5tuple;
 	struct ixgbe_5tuple_filter *filter;
 	int ret;
@@ -6811,25 +6812,25 @@ ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
 			&filter_5tuple,
 			sizeof(struct ixgbe_5tuple_filter_info));
 		filter->queue = ntuple_filter->queue;
-		ret = ixgbe_add_5tuple_filter(dev, filter);
+		ret = ixgbe_add_5tuple_filter(adapter, filter);
 		if (ret < 0) {
 			rte_free(filter);
 			return ret;
 		}
 	} else
-		ixgbe_remove_5tuple_filter(dev, filter);
+		ixgbe_remove_5tuple_filter(adapter, filter);
 
 	return 0;
 }
 
 int
-ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+ixgbe_add_del_ethertype_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ethertype_filter *filter,
 			bool add)
 {
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 	uint32_t etqf = 0;
 	uint32_t etqs = 0;
 	int ret;
@@ -7699,11 +7700,11 @@ ixgbe_e_tag_enable(struct ixgbe_hw *hw)
 }
 
 static int
-ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
+ixgbe_e_tag_filter_del(struct ixgbe_adapter *adapter,
 		       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	uint32_t i, rar_entries;
 	uint32_t rar_low, rar_high;
 
@@ -7736,11 +7737,11 @@ ixgbe_e_tag_filter_del(struct rte_eth_dev *dev,
 }
 
 static int
-ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
+ixgbe_e_tag_filter_add(struct ixgbe_adapter *adapter,
 		       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret = 0;
-	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter);
 	uint32_t i, rar_entries;
 	uint32_t rar_low, rar_high;
 
@@ -7752,7 +7753,7 @@ ixgbe_e_tag_filter_add(struct rte_eth_dev *dev,
 	}
 
 	/* One entry for one tunnel. Try to remove potential existing entry. */
-	ixgbe_e_tag_filter_del(dev, l2_tunnel);
+	ixgbe_e_tag_filter_del(adapter, l2_tunnel);
 
 	rar_entries = ixgbe_get_num_rx_addrs(hw);
 
@@ -7841,13 +7842,13 @@ ixgbe_remove_l2_tn_filter(struct ixgbe_l2_tn_info *l2_tn_info,
 
 /* Add l2 tunnel filter */
 int
-ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_add(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel,
 			       bool restore)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_key key;
 	struct ixgbe_l2_tn_filter *node;
 
@@ -7882,7 +7883,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
-		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
+		ret = ixgbe_e_tag_filter_add(adapter, l2_tunnel);
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Invalid tunnel type");
@@ -7898,12 +7899,12 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 
 /* Delete l2 tunnel filter */
 int
-ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_del(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel)
 {
 	int ret;
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_key key;
 
 	key.l2_tn_type = l2_tunnel->l2_tunnel_type;
@@ -7914,7 +7915,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 
 	switch (l2_tunnel->l2_tunnel_type) {
 	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
-		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
+		ret = ixgbe_e_tag_filter_del(adapter, l2_tunnel);
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Invalid tunnel type");
@@ -8311,12 +8312,14 @@ int ixgbe_enable_sec_tx_path_generic(struct ixgbe_hw *hw)
 static inline void
 ixgbe_ntuple_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	struct ixgbe_5tuple_filter *node;
 
 	TAILQ_FOREACH(node, &filter_info->fivetuple_list, entries) {
-		ixgbe_inject_5tuple_filter(dev, node);
+		ixgbe_inject_5tuple_filter(adapter, node);
 	}
 }
 
@@ -8361,8 +8364,10 @@ ixgbe_syn_filter_restore(struct rte_eth_dev *dev)
 static inline void
 ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_l2_tn_info *l2_tn_info =
-		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(adapter);
 	struct ixgbe_l2_tn_filter *node;
 	struct ixgbe_l2_tunnel_conf l2_tn_conf;
 
@@ -8370,7 +8375,8 @@ ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 		l2_tn_conf.l2_tunnel_type = node->key.l2_tn_type;
 		l2_tn_conf.tunnel_id = node->key.tn_id;
 		l2_tn_conf.pool = node->pool;
-		(void)ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_conf, TRUE);
+		(void)ixgbe_dev_l2_tunnel_filter_add(adapter,
+						     &l2_tn_conf, TRUE);
 	}
 }
 
@@ -8378,11 +8384,13 @@ ixgbe_l2_tn_filter_restore(struct rte_eth_dev *dev)
 static inline void
 ixgbe_rss_filter_restore(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
-		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter);
 
 	if (filter_info->rss_info.conf.queue_num)
-		ixgbe_config_rss_filter(dev,
+		ixgbe_config_rss_filter(adapter,
 			&filter_info->rss_info, TRUE);
 }
 
@@ -8419,12 +8427,14 @@ ixgbe_l2_tunnel_conf(struct rte_eth_dev *dev)
 void
 ixgbe_clear_all_ntuple_filter(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter =
+		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ixgbe_filter_info *filter_info =
 		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
 	struct ixgbe_5tuple_filter *p_5tuple;
 
 	while ((p_5tuple = TAILQ_FIRST(&filter_info->fivetuple_list)))
-		ixgbe_remove_5tuple_filter(dev, p_5tuple);
+		ixgbe_remove_5tuple_filter(adapter, p_5tuple);
 }
 
 /* remove all the ether type filters */
@@ -8468,6 +8478,7 @@ ixgbe_clear_syn_filter(struct rte_eth_dev *dev)
 int
 ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 {
+	struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev);
 	struct ixgbe_l2_tn_info *l2_tn_info =
 		IXGBE_DEV_PRIVATE_TO_L2_TN_INFO(dev->data->dev_private);
 	struct ixgbe_l2_tn_filter *l2_tn_filter;
@@ -8478,7 +8489,7 @@ ixgbe_clear_all_l2_tn_filter(struct rte_eth_dev *dev)
 		l2_tn_conf.l2_tunnel_type = l2_tn_filter->key.l2_tn_type;
 		l2_tn_conf.tunnel_id = l2_tn_filter->key.tn_id;
 		l2_tn_conf.pool = l2_tn_filter->pool;
-		ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_conf);
+		ret = ixgbe_dev_l2_tunnel_filter_del(adapter, &l2_tn_conf);
 		if (ret < 0)
 			return ret;
 	}
diff --git a/drivers/net/intel/ixgbe/ixgbe_ethdev.h b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
index 04e5e014d9..8eae6fc21f 100644
--- a/drivers/net/intel/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/intel/ixgbe/ixgbe_ethdev.h
@@ -516,6 +516,9 @@ uint16_t ixgbe_vf_representor_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts
 #define IXGBE_DEV_FDIR_CONF(dev) \
 	(&((struct ixgbe_adapter *)(dev)->data->dev_private)->fdir_conf)
 
+#define IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter) \
+	(&(adapter)->fdir_conf)
+
 #define IXGBE_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct ixgbe_adapter *)adapter)
 
@@ -663,13 +666,13 @@ uint32_t ixgbe_rssrk_reg_get(enum ixgbe_mac_type mac_type, uint8_t i);
 
 bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
 
-int ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
+int ixgbe_add_del_ntuple_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ntuple_filter *filter,
 			bool add);
-int ixgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
+int ixgbe_add_del_ethertype_filter(struct ixgbe_adapter *adapter,
 			struct rte_eth_ethertype_filter *filter,
 			bool add);
-int ixgbe_syn_filter_set(struct rte_eth_dev *dev,
+int ixgbe_syn_filter_set(struct ixgbe_adapter *adapter,
 			struct rte_eth_syn_filter *filter,
 			bool add);
 
@@ -685,22 +688,22 @@ struct ixgbe_l2_tunnel_conf {
 };
 
 int
-ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_add(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel,
 			       bool restore);
 int
-ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
+ixgbe_dev_l2_tunnel_filter_del(struct ixgbe_adapter *adapter,
 			       struct ixgbe_l2_tunnel_conf *l2_tunnel);
 void ixgbe_filterlist_init(void);
 void ixgbe_filterlist_flush(void);
 /*
  * Flow director function prototypes
  */
-int ixgbe_fdir_configure(struct rte_eth_dev *dev);
-int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
-int ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
+int ixgbe_fdir_configure(struct ixgbe_adapter *adapter);
+int ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter);
+int ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter,
 				    uint16_t offset);
-int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+int ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter,
 			      struct ixgbe_fdir_rule *rule,
 			      bool del, bool update);
 void ixgbe_fdir_info_get(struct rte_eth_dev *dev,
@@ -760,7 +763,7 @@ int ixgbe_rss_conf_init(struct ixgbe_rte_flow_rss_conf *out,
 		const struct rte_flow_action_rss *in);
 int ixgbe_action_rss_same(const struct rte_flow_action_rss *comp,
 			  const struct rte_flow_action_rss *with);
-int ixgbe_config_rss_filter(struct rte_eth_dev *dev,
+int ixgbe_config_rss_filter(struct ixgbe_adapter *adapter,
 		struct ixgbe_rte_flow_rss_conf *conf, bool add);
 
 void ixgbe_dev_macsec_register_enable(struct rte_eth_dev *dev,
diff --git a/drivers/net/intel/ixgbe/ixgbe_fdir.c b/drivers/net/intel/ixgbe/ixgbe_fdir.c
index e07c2adbda..2ab94e98aa 100644
--- a/drivers/net/intel/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/intel/ixgbe/ixgbe_fdir.c
@@ -79,11 +79,11 @@
 #define IXGBE_FDIRIP6M_INNER_MAC_SHIFT 4
 
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
-static int fdir_set_input_mask(struct rte_eth_dev *dev,
+static int fdir_set_input_mask(struct ixgbe_adapter *adapter,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
-static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
+static int fdir_set_input_mask_82599(struct ixgbe_adapter *adapter);
+static int fdir_set_input_mask_x550(struct ixgbe_adapter *adapter);
+static int ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
@@ -250,12 +250,13 @@ reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
*/ static int -fdir_set_input_mask_82599(struct rte_eth_dev *dev) +fdir_set_input_mask_82599(struct ixgbe_adapter *adapter) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); /* * mask VM pool and DIPv6 since there are currently not supported * mask FLEX byte, it will be set in flex_conf @@ -335,12 +336,13 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev) * but makes use of the rte_fdir_masks structure to see which bits to set. */ static int -fdir_set_input_mask_x550(struct rte_eth_dev *dev) +fdir_set_input_mask_x550(struct ixgbe_adapter *adapter) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); /* mask VM pool and DIPv6 since there are currently not supported * mask FLEX byte, it will be set in flex_conf */ @@ -428,11 +430,11 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev) } static int -ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev, +ixgbe_fdir_store_input_mask_82599(struct ixgbe_adapter *adapter, const struct rte_eth_fdir_masks *input_mask) { struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); uint16_t dst_ipv6m = 0; uint16_t src_ipv6m = 0; @@ -451,11 +453,11 @@ ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev, } static int 
-ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev, +ixgbe_fdir_store_input_mask_x550(struct ixgbe_adapter *adapter, const struct rte_eth_fdir_masks *input_mask) { struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask)); info->mask.vlan_tci_mask = input_mask->vlan_tci_mask; @@ -467,47 +469,49 @@ ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev, } static int -ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev, +ixgbe_fdir_store_input_mask(struct ixgbe_adapter *adapter, const struct rte_eth_fdir_masks *input_mask) { - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); enum rte_fdir_mode mode = fdir_conf->mode; if (mode >= RTE_FDIR_MODE_SIGNATURE && mode <= RTE_FDIR_MODE_PERFECT) - return ixgbe_fdir_store_input_mask_82599(dev, input_mask); + return ixgbe_fdir_store_input_mask_82599(adapter, input_mask); else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN && mode <= RTE_FDIR_MODE_PERFECT_TUNNEL) - return ixgbe_fdir_store_input_mask_x550(dev, input_mask); + return ixgbe_fdir_store_input_mask_x550(adapter, input_mask); PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode); return -ENOTSUP; } int -ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev) +ixgbe_fdir_set_input_mask(struct ixgbe_adapter *adapter) { - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); enum rte_fdir_mode mode = fdir_conf->mode; if (mode >= RTE_FDIR_MODE_SIGNATURE && mode <= RTE_FDIR_MODE_PERFECT) - return fdir_set_input_mask_82599(dev); + return fdir_set_input_mask_82599(adapter); else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN && mode <= RTE_FDIR_MODE_PERFECT_TUNNEL) - return fdir_set_input_mask_x550(dev); + return fdir_set_input_mask_x550(adapter); 
PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode); return -ENOTSUP; } int -ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev, +ixgbe_fdir_set_flexbytes_offset(struct ixgbe_adapter *adapter, uint16_t offset) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); struct ixgbe_hw_fdir_info *fdir_info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); uint32_t fdirctrl; int i; @@ -556,16 +560,16 @@ ixgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev, } static int -fdir_set_input_mask(struct rte_eth_dev *dev, +fdir_set_input_mask(struct ixgbe_adapter *adapter, const struct rte_eth_fdir_masks *input_mask) { int ret; - ret = ixgbe_fdir_store_input_mask(dev, input_mask); + ret = ixgbe_fdir_store_input_mask(adapter, input_mask); if (ret) return ret; - return ixgbe_fdir_set_input_mask(dev); + return ixgbe_fdir_set_input_mask(adapter); } /* @@ -573,12 +577,12 @@ fdir_set_input_mask(struct rte_eth_dev *dev, * arguments are valid */ static int -ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev, +ixgbe_set_fdir_flex_conf(struct ixgbe_adapter *adapter, const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); const struct rte_eth_flex_payload_cfg *flex_cfg; const struct rte_eth_fdir_flex_mask *flex_mask; uint32_t fdirm; @@ -636,10 +640,11 @@ ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev, } int -ixgbe_fdir_configure(struct rte_eth_dev *dev) +ixgbe_fdir_configure(struct ixgbe_adapter *adapter) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ixgbe_hw *hw = 
IXGBE_DEV_PRIVATE_TO_HW(adapter); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); int err; uint32_t fdirctrl, pbsize; int i; @@ -686,12 +691,12 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev) for (i = 1; i < 8; i++) IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0); - err = fdir_set_input_mask(dev, &fdir_conf->mask); + err = fdir_set_input_mask(adapter, &fdir_conf->mask); if (err < 0) { PMD_INIT_LOG(ERR, " Error on setting FD mask"); return err; } - err = ixgbe_set_fdir_flex_conf(dev, &fdir_conf->flex_conf, + err = ixgbe_set_fdir_flex_conf(adapter, &fdir_conf->flex_conf, &fdirctrl); if (err < 0) { PMD_INIT_LOG(ERR, " Error on setting FD flexible arguments."); @@ -1114,20 +1119,21 @@ ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info, } int -ixgbe_fdir_filter_program(struct rte_eth_dev *dev, +ixgbe_fdir_filter_program(struct ixgbe_adapter *adapter, struct ixgbe_fdir_rule *rule, bool del, bool update) { - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); + struct rte_eth_fdir_conf *fdir_conf = + IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); uint32_t fdircmd_flags; uint32_t fdirhash; uint8_t queue; bool is_perfect = FALSE; int err; struct ixgbe_hw_fdir_info *info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); enum rte_fdir_mode fdir_mode = fdir_conf->mode; struct ixgbe_fdir_filter *node; bool add_node = FALSE; diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index c40ef02f23..be92e3cdf3 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -2838,6 +2838,7 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ixgbe_adapter *adapter = 
IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE; @@ -2871,7 +2872,7 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, if (fdir_conf->mode == RTE_FDIR_MODE_NONE) { fdir_conf->mode = rule->mode; - ret = ixgbe_fdir_configure(dev); + ret = ixgbe_fdir_configure(adapter); if (ret) { fdir_conf->mode = RTE_FDIR_MODE_NONE; return ret; @@ -3004,11 +3005,13 @@ ixgbe_parse_rss_filter(struct rte_eth_dev *dev, static void ixgbe_clear_rss_filter(struct rte_eth_dev *dev) { + struct ixgbe_adapter *adapter = + IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ixgbe_filter_info *filter_info = IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private); if (filter_info->rss_info.conf.queue_num) - ixgbe_config_rss_filter(dev, &filter_info->rss_info, FALSE); + ixgbe_config_rss_filter(adapter, &filter_info->rss_info, FALSE); } void @@ -3099,13 +3102,15 @@ ixgbe_flow_create(struct rte_eth_dev *dev, struct rte_flow_error *error) { int ret; + struct ixgbe_adapter *adapter = + IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_eth_ntuple_filter ntuple_filter; struct rte_eth_ethertype_filter ethertype_filter; struct rte_eth_syn_filter syn_filter; struct ixgbe_fdir_rule fdir_rule; struct ixgbe_l2_tunnel_conf l2_tn_filter; struct ixgbe_hw_fdir_info *fdir_info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); struct ixgbe_rte_flow_rss_conf rss_conf; struct rte_flow *flow = NULL; struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr; @@ -3147,7 +3152,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, actions, &ntuple_filter, error); if (!ret) { - ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, TRUE); + ret = ixgbe_add_del_ntuple_filter(adapter, &ntuple_filter, TRUE); if (!ret) { ntuple_filter_ptr = rte_zmalloc("ixgbe_ntuple_filter", sizeof(struct ixgbe_ntuple_filter_ele), 0); @@ -3171,7 +3176,7 @@ 
ixgbe_flow_create(struct rte_eth_dev *dev, ret = ixgbe_parse_ethertype_filter(dev, attr, pattern, actions, ðertype_filter, error); if (!ret) { - ret = ixgbe_add_del_ethertype_filter(dev, + ret = ixgbe_add_del_ethertype_filter(adapter, ðertype_filter, TRUE); if (!ret) { ethertype_filter_ptr = rte_zmalloc( @@ -3197,7 +3202,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, ret = ixgbe_parse_syn_filter(dev, attr, pattern, actions, &syn_filter, error); if (!ret) { - ret = ixgbe_syn_filter_set(dev, &syn_filter, TRUE); + ret = ixgbe_syn_filter_set(adapter, &syn_filter, TRUE); if (!ret) { syn_filter_ptr = rte_zmalloc("ixgbe_syn_filter", sizeof(struct ixgbe_eth_syn_filter_ele), 0); @@ -3229,12 +3234,12 @@ ixgbe_flow_create(struct rte_eth_dev *dev, *&fdir_info->mask = *&fdir_rule.mask; if (fdir_rule.mask.flex_bytes_mask) { - ret = ixgbe_fdir_set_flexbytes_offset(dev, + ret = ixgbe_fdir_set_flexbytes_offset(adapter, fdir_rule.flex_bytes_offset); if (ret) goto out; } - ret = ixgbe_fdir_set_input_mask(dev); + ret = ixgbe_fdir_set_input_mask(adapter); if (ret) goto out; @@ -3259,7 +3264,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, } if (fdir_rule.b_spec) { - ret = ixgbe_fdir_filter_program(dev, &fdir_rule, + ret = ixgbe_fdir_filter_program(adapter, &fdir_rule, FALSE, FALSE); if (!ret) { fdir_rule_ptr = rte_zmalloc("ixgbe_fdir_filter", @@ -3297,7 +3302,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, ret = ixgbe_parse_l2_tn_filter(dev, attr, pattern, actions, &l2_tn_filter, error); if (!ret) { - ret = ixgbe_dev_l2_tunnel_filter_add(dev, &l2_tn_filter, FALSE); + ret = ixgbe_dev_l2_tunnel_filter_add(adapter, &l2_tn_filter, FALSE); if (!ret) { l2_tn_filter_ptr = rte_zmalloc("ixgbe_l2_tn_filter", sizeof(struct ixgbe_eth_l2_tunnel_conf_ele), 0); @@ -3320,7 +3325,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, ret = ixgbe_parse_rss_filter(dev, attr, actions, &rss_conf, error); if (!ret) { - ret = ixgbe_config_rss_filter(dev, &rss_conf, TRUE); + ret = ixgbe_config_rss_filter(adapter, 
&rss_conf, TRUE); if (!ret) { rss_filter_ptr = rte_zmalloc("ixgbe_rss_filter", sizeof(struct ixgbe_rss_conf_ele), 0); @@ -3420,6 +3425,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, struct rte_flow_error *error) { int ret; + struct ixgbe_adapter *adapter = + IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_flow *pmd_flow = flow; enum rte_filter_type filter_type = pmd_flow->filter_type; struct rte_eth_ntuple_filter ntuple_filter; @@ -3434,7 +3441,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, struct ixgbe_fdir_rule_ele *fdir_rule_ptr; struct ixgbe_flow_mem *ixgbe_flow_mem_ptr; struct ixgbe_hw_fdir_info *fdir_info = - IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FDIR_INFO(adapter); struct ixgbe_rss_conf_ele *rss_filter_ptr; /* Special case for SECURITY flows */ @@ -3450,7 +3457,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, rte_memcpy(&ntuple_filter, &ntuple_filter_ptr->filter_info, sizeof(struct rte_eth_ntuple_filter)); - ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE); + ret = ixgbe_add_del_ntuple_filter(adapter, &ntuple_filter, FALSE); if (!ret) { TAILQ_REMOVE(&filter_ntuple_list, ntuple_filter_ptr, entries); @@ -3463,7 +3470,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, rte_memcpy(ðertype_filter, ðertype_filter_ptr->filter_info, sizeof(struct rte_eth_ethertype_filter)); - ret = ixgbe_add_del_ethertype_filter(dev, + ret = ixgbe_add_del_ethertype_filter(adapter, ðertype_filter, FALSE); if (!ret) { TAILQ_REMOVE(&filter_ethertype_list, @@ -3477,7 +3484,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, rte_memcpy(&syn_filter, &syn_filter_ptr->filter_info, sizeof(struct rte_eth_syn_filter)); - ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE); + ret = ixgbe_syn_filter_set(adapter, &syn_filter, FALSE); if (!ret) { TAILQ_REMOVE(&filter_syn_list, syn_filter_ptr, entries); @@ -3489,7 +3496,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, rte_memcpy(&fdir_rule, &fdir_rule_ptr->filter_info, 
sizeof(struct ixgbe_fdir_rule)); - ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE); + ret = ixgbe_fdir_filter_program(adapter, &fdir_rule, TRUE, FALSE); if (!ret) { TAILQ_REMOVE(&filter_fdir_list, fdir_rule_ptr, entries); @@ -3503,7 +3510,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, pmd_flow->rule; rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info, sizeof(struct ixgbe_l2_tunnel_conf)); - ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter); + ret = ixgbe_dev_l2_tunnel_filter_del(adapter, &l2_tn_filter); if (!ret) { TAILQ_REMOVE(&filter_l2_tunnel_list, l2_tn_filter_ptr, entries); @@ -3513,7 +3520,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev, case RTE_ETH_FILTER_HASH: rss_filter_ptr = (struct ixgbe_rss_conf_ele *) pmd_flow->rule; - ret = ixgbe_config_rss_filter(dev, + ret = ixgbe_config_rss_filter(adapter, &rss_filter_ptr->filter_info, FALSE); if (!ret) { TAILQ_REMOVE(&filter_rss_list, diff --git a/drivers/net/intel/ixgbe/ixgbe_rxtx.c b/drivers/net/intel/ixgbe/ixgbe_rxtx.c index 3be0f0492a..60222693fe 100644 --- a/drivers/net/intel/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/intel/ixgbe/ixgbe_rxtx.c @@ -6132,7 +6132,7 @@ ixgbe_action_rss_same(const struct rte_flow_action_rss *comp, } int -ixgbe_config_rss_filter(struct rte_eth_dev *dev, +ixgbe_config_rss_filter(struct ixgbe_adapter *adapter, struct ixgbe_rte_flow_rss_conf *conf, bool add) { struct ixgbe_hw *hw; @@ -6148,17 +6148,17 @@ ixgbe_config_rss_filter(struct rte_eth_dev *dev, .rss_hf = conf->conf.types, }; struct ixgbe_filter_info *filter_info = - IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private); + IXGBE_DEV_PRIVATE_TO_FILTER_INFO(adapter); PMD_INIT_FUNC_TRACE(); - hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + hw = IXGBE_DEV_PRIVATE_TO_HW(adapter); sp_reta_size = ixgbe_reta_size_get(hw->mac.type); if (!add) { if (ixgbe_action_rss_same(&filter_info->rss_info.conf, &conf->conf)) { - ixgbe_rss_disable(dev); + ixgbe_mrqc_rss_remove(hw); memset(&filter_info->rss_info, 0, 
sizeof(struct ixgbe_rte_flow_rss_conf)); return 0; @@ -6188,7 +6188,7 @@ ixgbe_config_rss_filter(struct rte_eth_dev *dev, * the RSS hash of input packets. */ if ((rss_conf.rss_hf & IXGBE_RSS_OFFLOAD_ALL) == 0) { - ixgbe_rss_disable(dev); + ixgbe_mrqc_rss_remove(hw); return 0; } if (rss_conf.rss_key == NULL) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
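The mechanical pattern this patch applies everywhere can be sketched in isolation: unwrap `dev->data->dev_private` into the adapter once at the public entry point, then pass the adapter (not the `rte_eth_dev`) to every internal helper. The types below are simplified local stand-ins for the real DPDK/ixgbe definitions, and `ixgbe_flow_entry` is a hypothetical caller, not a function from the patch:

```c
#include <assert.h>

/* Simplified stand-ins for the real types in ethdev_driver.h /
 * ixgbe_ethdev.h — illustration only. */
struct ixgbe_adapter { int fdir_mode; };
struct rte_eth_dev_data { void *dev_private; };
struct rte_eth_dev { struct rte_eth_dev_data *data; };

#define IXGBE_DEV_PRIVATE_TO_ADAPTER(priv) ((struct ixgbe_adapter *)(priv))

/* After the refactor, internal helpers take the adapter directly. */
static int
ixgbe_fdir_configure(struct ixgbe_adapter *adapter)
{
	return adapter->fdir_mode;
}

/* Hypothetical entry point showing the call-site pattern. */
static int
ixgbe_flow_entry(struct rte_eth_dev *dev)
{
	/* unwrap dev_private once at the boundary... */
	struct ixgbe_adapter *adapter =
		IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
	/* ...then pass adapter, not dev, down the call chain */
	return ixgbe_fdir_configure(adapter);
}
```

The payoff is that helpers no longer need `rte_eth_dev` at all, which makes them callable from contexts (such as restore paths) where only the adapter is at hand.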
* [PATCH v3 05/29] net/intel/common: add common flow action parsing 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (3 preceding siblings ...) 2026-04-10 13:12 ` [PATCH v3 04/29] net/ixgbe: use adapter in flow-related calls Anatoly Burakov @ 2026-04-10 13:12 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 06/29] net/intel/common: add common flow attr validation Anatoly Burakov ` (23 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:12 UTC (permalink / raw) To: dev, Bruce Richardson Currently, each driver has their own code for action parsing, which results in a lot of duplication and subtle mismatches in behavior between drivers. Add common infrastructure, based on the following assumptions: - All drivers support at most 4 actions at once, but usually less - Not every action is supported by all drivers - We can check a few common things to filter out obviously wrong actions - Driver performs semantic checks on all valid actions So, the intention is to reject everything we can reasonably reject at the outset without knowing anything about the drivers, parametrize what is trivial to parametrize, and leave the rest for the driver to implement. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/common/flow_check.h | 237 ++++++++++++++++++++++++++ 1 file changed, 237 insertions(+) create mode 100644 drivers/net/intel/common/flow_check.h diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h new file mode 100644 index 0000000000..ad91e09f67 --- /dev/null +++ b/drivers/net/intel/common/flow_check.h @@ -0,0 +1,237 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2025 Intel Corporation + */ + +#ifndef _COMMON_INTEL_FLOW_CHECK_H_ +#define _COMMON_INTEL_FLOW_CHECK_H_ + +#include <bus_pci_driver.h> +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * Common attr and action validation code for Intel drivers. 
+ */ + +/** + * Maximum number of actions that can be stored in a parsed action list. + */ +#define CI_FLOW_PARSED_ACTIONS_MAX 4 + +/* Actions that are reasonably expected to have a conf structure */ +static const enum rte_flow_action_type need_conf[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END +}; + +/** + * Is action type in this list of action types? + */ +__rte_internal +static inline bool +ci_flow_action_type_in_list(const enum rte_flow_action_type type, + const enum rte_flow_action_type list[]) +{ + size_t i = 0; + while (list[i] != RTE_FLOW_ACTION_TYPE_END) { + if (type == list[i]) + return true; + i++; + } + return false; +} + +/* Forward declarations */ +struct ci_flow_actions; +struct ci_flow_actions_check_param; + +/** + * Driver-specific action list validation callback. + * + * Performs driver-specific validation of action parameter list. + * Called after all actions have been parsed and added to the list, + * allowing validation based on the complete action set. + * + * @param actions + * The complete list of parsed actions (for context-dependent validation). + * @param driver_ctx + * Opaque driver context (e.g., adapter/queue configuration). + * @param error + * Pointer to rte_flow_error for reporting failures. + * @return + * 0 on success, negative errno on failure. + */ +typedef int (*ci_flow_actions_check_fn)(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error); + +/** + * List of actions that we know we've validated. + */ +struct ci_flow_actions { + /* Number of actions in the list. */ + uint8_t count; + /* Parsed actions array. */ + struct rte_flow_action const *actions[CI_FLOW_PARSED_ACTIONS_MAX]; +}; + +/** + * Parameters for action list validation. 
Any element can be NULL/0 as checks are only performed + * against constraints specified. */ +struct ci_flow_actions_check_param { + /** + * Driver-specific context pointer (e.g., adapter/queue configuration). Can be NULL. + */ + void *driver_ctx; + /** + * Driver-specific action list validation callback. Can be NULL. + */ + ci_flow_actions_check_fn check; + /** + * Allowed action types for this parse parameter. Must be terminated with + * RTE_FLOW_ACTION_TYPE_END. Can be NULL. + */ + const enum rte_flow_action_type *allowed_types; + size_t max_actions; /**< Maximum number of actions allowed. */ +}; + +static inline int +__flow_action_check_rss(const struct rte_flow_action_rss *rss, struct rte_flow_error *error) +{ + size_t q, q_num = rss->queue_num; + /* either we have both queue count and queue array, or we have neither */ + if ((q_num == 0) != (rss->queue == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "If queue number is specified, queue array must also be specified"); + } + for (q = 1; q < q_num; q++) { + uint16_t qi = rss->queue[q]; + if (rss->queue[q - 1] + 1 != qi) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queues must be contiguous"); + } + } + /* either we have both key and key length, or we have neither */ + if ((rss->key_len == 0) != (rss->key == NULL)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "If RSS key is specified, key length must also be specified"); + } + return 0; +} + +static inline int +__flow_action_check_generic(const struct rte_flow_action *action, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + /* is this action in our allowed list? */ + if (param != NULL && param->allowed_types != NULL && + !ci_flow_action_type_in_list(action->type, param->allowed_types)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Unsupported action"); + } + /* do we need to validate presence of conf?
*/ + if (ci_flow_action_type_in_list(action->type, need_conf)) { + if (action->conf == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, action, + "Action requires configuration"); + } + } + + /* type-specific validation */ + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = + (const struct rte_flow_action_rss *)action->conf; + if (__flow_action_check_rss(rss, error) < 0) + return -rte_errno; + break; + } + default: + /* no specific validation */ + break; + } + + return 0; +} + +/** + * Validate and parse a list of rte_flow_action into a parsed action list. + * + * @param actions pointer to array of rte_flow_action, terminated by RTE_FLOW_ACTION_TYPE_END + * @param param pointer to ci_flow_actions_check_param structure (can be NULL) + * @param parsed_actions pointer to ci_flow_actions structure to store parsed actions + * @param error pointer to rte_flow_error structure for error reporting + * + * @return 0 on success, negative errno on failure. 
+ */ +__rte_internal +static inline int +ci_flow_check_actions(const struct rte_flow_action *actions, + const struct ci_flow_actions_check_param *param, + struct ci_flow_actions *parsed_actions, + struct rte_flow_error *error) +{ + size_t i = 0; + + if (actions == NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Missing actions"); + } + + /* reset the list */ + *parsed_actions = (struct ci_flow_actions){0}; + + while (actions[i].type != RTE_FLOW_ACTION_TYPE_END) { + const struct rte_flow_action *action = &actions[i++]; + + /* skip VOID actions */ + if (action->type == RTE_FLOW_ACTION_TYPE_VOID) + continue; + + /* generic validation for actions - this will check against param as well */ + if (__flow_action_check_generic(action, param, error) < 0) + return -rte_errno; + + /* add action to the list */ + if (parsed_actions->count >= RTE_DIM(parsed_actions->actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + /* user may have specified a maximum number of actions */ + if (param != NULL && param->max_actions != 0 && + parsed_actions->count >= param->max_actions) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Too many actions"); + } + parsed_actions->actions[parsed_actions->count++] = action; + } + + /* now, call into user validation if specified */ + if (param != NULL && param->check != NULL) { + if (param->check(parsed_actions, param, error) < 0) + return -rte_errno; + } + /* if we didn't parse anything, valid action list is empty */ + return parsed_actions->count == 0 ? -EINVAL : 0; +} + +#ifdef __cplusplus +} +#endif + +#endif /* _COMMON_INTEL_FLOW_CHECK_H_ */ -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
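The RSS sanity rules enforced by `__flow_action_check_rss()` can be restated as a standalone sketch, with no DPDK types involved: the queue count and queue array must be supplied together, and the queues must form one contiguous ascending range. The helper name `rss_queues_valid` is hypothetical, used only for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the common RSS action queue rules. Returns true only when
 * the queue list is well-formed under the constraints above. */
static bool
rss_queues_valid(const uint16_t *queue, size_t queue_num)
{
	size_t q;

	/* count and array must both be set, or both be unset */
	if ((queue_num == 0) != (queue == NULL))
		return false;
	/* each queue must directly follow its predecessor */
	for (q = 1; q < queue_num; q++)
		if (queue[q - 1] + 1 != queue[q])
			return false;
	return true;
}
```

A list such as `{4, 5, 6}` passes, while `{4, 6}` is rejected as non-contiguous; an empty list (count 0, NULL array) is accepted, since per-driver code decides later whether an empty RSS queue set is meaningful.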
* [PATCH v3 06/29] net/intel/common: add common flow attr validation 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (4 preceding siblings ...) 2026-04-10 13:12 ` [PATCH v3 05/29] net/intel/common: add common flow action parsing Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 07/29] net/ixgbe: use common checks in ethertype filter Anatoly Burakov ` (22 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson There are a lot of commonalities between what kinds of flow attr each Intel driver supports. Add a helper function that will validate attr based on common requirements and (optional) parameter checks. Things we check for: - Rejecting NULL attr (obviously) - Default to ingress flows - Transfer, group, priority, and egress are not allowed unless requested Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/common/flow_check.h | 70 +++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/drivers/net/intel/common/flow_check.h b/drivers/net/intel/common/flow_check.h index ad91e09f67..134bb2303c 100644 --- a/drivers/net/intel/common/flow_check.h +++ b/drivers/net/intel/common/flow_check.h @@ -53,6 +53,7 @@ ci_flow_action_type_in_list(const enum rte_flow_action_type type, /* Forward declarations */ struct ci_flow_actions; struct ci_flow_actions_check_param; +struct ci_flow_attr_check_param; /** * Driver-specific action list validation callback. @@ -230,6 +231,75 @@ ci_flow_check_actions(const struct rte_flow_action *actions, return parsed_actions->count == 0 ? -EINVAL : 0; } +/** + * Parameter structure for attr check. + */ +struct ci_flow_attr_check_param { + bool allow_priority; /**< True if priority attribute is allowed. */ + bool allow_transfer; /**< True if transfer attribute is allowed. */ + bool allow_group; /**< True if group attribute is allowed. 
*/ + bool expect_egress; /**< True if egress attribute is expected. */ +}; + +/** + * Validate rte_flow_attr structure against specified constraints. + * + * @param attr Pointer to rte_flow_attr structure to validate. + * @param attr_param Pointer to ci_flow_attr_check_param structure specifying constraints. + * @param error Pointer to rte_flow_error structure for error reporting. + * + * @return 0 on success, negative errno on failure. + */ +__rte_internal +static inline int +ci_flow_check_attr(const struct rte_flow_attr *attr, + const struct ci_flow_attr_check_param *attr_param, + struct rte_flow_error *error) +{ + if (attr == NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "NULL attribute"); + } + + /* Direction must be either ingress or egress */ + if (attr->ingress == attr->egress) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, attr, + "Either ingress or egress must be set"); + } + + /* Expect ingress by default */ + if (attr->egress && (attr_param == NULL || !attr_param->expect_egress)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, attr, + "Egress not supported"); + } + + /* May not be supported */ + if (attr->transfer && (attr_param == NULL || !attr_param->allow_transfer)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, attr, + "Transfer not supported"); + } + + /* May not be supported */ + if (attr->group && (attr_param == NULL || !attr_param->allow_group)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, attr, + "Group not supported"); + } + + /* May not be supported */ + if (attr->priority && (attr_param == NULL || !attr_param->allow_priority)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority not supported"); + } + + return 0; +} + #ifdef __cplusplus } #endif -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
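The decision table `ci_flow_check_attr()` implements can be mirrored with local stand-in types (bit-fields as in `rte_flow_attr`); `check_attr`/`struct attr`/`struct param` below are illustrative names, not DPDK API, and error reporting is reduced to `-EINVAL`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow_attr and ci_flow_attr_check_param. */
struct attr { unsigned int ingress:1, egress:1, transfer:1; unsigned int group, priority; };
struct param { bool allow_priority, allow_transfer, allow_group, expect_egress; };

/* Sketch of the common attr validation: ingress by default, exactly one
 * direction, and each optional attribute rejected unless opted into. */
static int
check_attr(const struct attr *a, const struct param *p)
{
	if (a == NULL)
		return -EINVAL;
	if (a->ingress == a->egress)	/* exactly one direction must be set */
		return -EINVAL;
	if (a->egress && (p == NULL || !p->expect_egress))
		return -EINVAL;
	if (a->transfer && (p == NULL || !p->allow_transfer))
		return -EINVAL;
	if (a->group && (p == NULL || !p->allow_group))
		return -EINVAL;
	if (a->priority && (p == NULL || !p->allow_priority))
		return -EINVAL;
	return 0;
}
```

Note that a NULL param is a valid, maximally strict configuration: plain ingress flows pass, everything else is rejected, so drivers only describe what they additionally support.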
* [PATCH v3 07/29] net/ixgbe: use common checks in ethertype filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (5 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 06/29] net/intel/common: add common flow attr validation Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 08/29] net/ixgbe: use common checks in syn filter Anatoly Burakov ` (21 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ethertype. This allows us to remove some checks as they are no longer necessary (such as whether DROP flag was set - if we do not accept DROP actions, we do not set the DROP flag), as well as make some checks more stringent (such as rejecting more than one action). Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 190 ++++++++++----------------- 1 file changed, 73 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index be92e3cdf3..536f50a476 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -43,6 +43,8 @@ #include "base/ixgbe_phy.h" #include "rte_pmd_ixgbe.h" +#include "../common/flow_check.h" + #define IXGBE_MIN_N_TUPLE_PRIO 1 #define IXGBE_MAX_N_TUPLE_PRIO 7 @@ -133,6 +135,41 @@ const struct rte_flow_action *next_no_void_action( } } +/* + * All ixgbe engines mostly check the same stuff, so use a common check. 
+ */ +static int +ixgbe_flow_actions_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action; + struct rte_eth_dev_data *dev_data = param->driver_ctx; + size_t idx; + + for (idx = 0; idx < actions->count; idx++) { + action = actions->actions[idx]; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *queue = action->conf; + if (queue->index >= dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "queue index out of range"); + } + break; + } + default: + /* no specific validation */ + break; + } + } + return 0; +} + /** * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and @@ -743,38 +780,14 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, * item->last should be NULL. */ static int -cons_parse_ethertype_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item *pattern, - const struct rte_flow_action *actions, - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +cons_parse_ethertype_filter(const struct rte_flow_item *pattern, + const struct rte_flow_action *action, + struct rte_eth_ethertype_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_eth *eth_spec; const struct rte_flow_item_eth *eth_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } item 
= next_no_void_pattern(pattern, NULL); /* The first non-void item should be MAC. */ @@ -844,87 +857,30 @@ cons_parse_ethertype_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* Parse action */ - - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - } else { - filter->flags |= RTE_ETHTYPE_FLAGS_DROP; - } - - /* Check if the next non-void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* Parse attr */ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } + filter->queue = ((const struct rte_flow_action_queue *)action->conf)->index; return 0; } static int -ixgbe_parse_ethertype_filter(struct 
rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_ethertype_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_ethertype_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev->data, + .check = ixgbe_flow_actions_check + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -934,19 +890,27 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_ethertype_filter(attr, pattern, - actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - if (filter->queue >= dev->data->nb_rx_queues) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "queue index much too big"); - return -rte_errno; + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); } + action = parsed_actions.actions[0]; + + ret = cons_parse_ethertype_filter(pattern, action, filter, error); + if (ret) + return 
ret; if (filter->ether_type == RTE_ETHER_TYPE_IPV4 || filter->ether_type == RTE_ETHER_TYPE_IPV6) { @@ -965,14 +929,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) { - memset(filter, 0, sizeof(struct rte_eth_ethertype_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "drop option is unsupported"); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 08/29] net/ixgbe: use common checks in syn filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (6 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 07/29] net/ixgbe: use common checks in ethertype filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 09/29] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov ` (20 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in syn filter. Some checks have been rearranged or become more stringent due to using common infrastructure. In particular, group attr was ignored previously but is now explicitly rejected. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 130 +++++++-------------------- 1 file changed, 32 insertions(+), 98 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 536f50a476..f1321ac223 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -953,38 +953,13 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr * item->last should be NULL. 
*/ static int -cons_parse_syn_filter(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +cons_parse_syn_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], + const struct rte_flow_action_queue *q_act, struct rte_eth_syn_filter *filter, + struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_tcp *tcp_spec; const struct rte_flow_item_tcp *tcp_mask; - const struct rte_flow_action_queue *act_q; - - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } /* the first not void item should be MAC or IPv4 or IPv6 or TCP */ @@ -1092,63 +1067,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, return -rte_errno; } - /* check if the first not void action is QUEUE. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - act_q = (const struct rte_flow_action_queue *)act->conf; - filter->queue = act_q->index; - if (filter->queue >= IXGBE_MAX_RX_QUEUE_NUM) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct rte_eth_syn_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } + filter->queue = q_act->index; /* Support 2 priorities, the lowest or highest. 
*/ if (!attr->priority) { @@ -1159,7 +1078,7 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, memset(filter, 0, sizeof(struct rte_eth_syn_filter)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); + attr, "Priority can be 0 or 0xFFFFFFFF"); return -rte_errno; } @@ -1167,15 +1086,27 @@ cons_parse_syn_filter(const struct rte_flow_attr *attr, } static int -ixgbe_parse_syn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_eth_syn_filter *filter, - struct rte_flow_error *error) +ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], const struct rte_flow_action actions[], + struct rte_eth_syn_filter *filter, struct rte_flow_error *error) { int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev->data, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540 && @@ -1185,16 +1116,19 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - ret = cons_parse_syn_filter(attr, pattern, - actions, filter, error); - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); if (ret) return ret; - 
return 0; + action = parsed_actions.actions[0]; + + return cons_parse_syn_filter(attr, pattern, action->conf, filter, error); } /** -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 09/29] net/ixgbe: use common checks in L2 tunnel filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (7 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 08/29] net/ixgbe: use common checks in syn filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 10/29] net/ixgbe: use common checks in ntuple filter Anatoly Burakov ` (19 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in L2 tunnel filter. Some checks have become more stringent as a result, in particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 149 +++++++++------------------ 1 file changed, 48 insertions(+), 101 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index f1321ac223..af01f5e92e 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -145,6 +145,7 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, { const struct rte_flow_action *action; struct rte_eth_dev_data *dev_data = param->driver_ctx; + struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev_data->dev_private); size_t idx; for (idx = 0; idx < actions->count; idx++) { @@ -162,6 +163,17 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, } break; } + case RTE_FLOW_ACTION_TYPE_VF: + { + const struct rte_flow_action_vf *vf = action->conf; + if (vf->id >= ad->max_vfs) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + action, + "VF id out of range"); + } + break; + } default: /* no specific validation */ break; @@ -900,12 +912,6 @@ ixgbe_parse_ethertype_filter(struct rte_eth_dev *dev, const struct rte_flow_attr if (ret) return ret; - /* only one 
action is supported */ - if (parsed_actions.count > 1) { - return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - parsed_actions.actions[1], - "Only one action can be specified at a time"); - } action = parsed_actions.actions[0]; ret = cons_parse_ethertype_filter(pattern, action, filter, error); @@ -1151,40 +1157,16 @@ ixgbe_parse_syn_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr */ static int cons_parse_l2_tn_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action *action, struct ixgbe_l2_tunnel_conf *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; const struct rte_flow_item_e_tag *e_tag_spec; const struct rte_flow_item_e_tag *e_tag_mask; - const struct rte_flow_action *act; - const struct rte_flow_action_vf *act_vf; struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* The first not void item should be e-tag. 
*/ item = next_no_void_pattern(pattern, NULL); if (item->type != RTE_FLOW_ITEM_TYPE_E_TAG) { @@ -1242,71 +1224,13 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is VF or PF. 
*/ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_VF && - act->type != RTE_FLOW_ACTION_TYPE_PF) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = (const struct rte_flow_action_vf *)act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *act_vf = action->conf; filter->pool = act_vf->id; } else { filter->pool = ad->max_vfs; } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } @@ -1318,29 +1242,52 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, struct ixgbe_l2_tunnel_conf *l2_tn_filter, struct rte_flow_error *error) { + struct rte_eth_dev_data *dev_data = dev->data; + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev_data->dev_private); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only vf/pf is allowed here */ + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev_data, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; int ret = 0; - struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ixgbe_adapter *ad = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - uint16_t vf_num; - - ret = cons_parse_l2_tn_filter(dev, attr, pattern, - actions, l2_tn_filter, error); + const struct rte_flow_action *action; if (hw->mac.type != ixgbe_mac_X550 && hw->mac.type != ixgbe_mac_X550EM_x && hw->mac.type != ixgbe_mac_X550EM_a && hw->mac.type != 
ixgbe_mac_E610) { - memset(l2_tn_filter, 0, sizeof(struct ixgbe_l2_tunnel_conf)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Not supported by L2 tunnel filter"); return -rte_errno; } - vf_num = ad->max_vfs; + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - if (l2_tn_filter->pool > vf_num) - return -rte_errno; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + /* only one action is supported */ + if (parsed_actions.count > 1) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + parsed_actions.actions[1], + "Only one action can be specified at a time"); + } + action = parsed_actions.actions[0]; + + ret = cons_parse_l2_tn_filter(dev, pattern, action, l2_tn_filter, error); return ret; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 10/29] net/ixgbe: use common checks in ntuple filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (8 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 09/29] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 11/29] net/ixgbe: use common checks in security filter Anatoly Burakov ` (18 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in ntuple filter. As a result, some checks have become more stringent, in particular the group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 133 ++++++++------------------- 1 file changed, 36 insertions(+), 97 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index af01f5e92e..f6a57af98b 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -219,12 +219,11 @@ ixgbe_flow_actions_check(const struct ci_flow_actions *actions, static int cons_parse_ntuple_filter(const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_flow_action_queue *q_act, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { const struct rte_flow_item *item; - const struct rte_flow_action *act; const struct rte_flow_item_ipv4 *ipv4_spec; const struct rte_flow_item_ipv4 *ipv4_mask; const struct rte_flow_item_tcp *tcp_spec; @@ -240,24 +239,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, struct rte_flow_item_eth eth_null; struct rte_flow_item_vlan vlan_null; - if (!pattern) { - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } 
- - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* Priority must be 16-bit */ + if (attr->priority > UINT16_MAX) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr, + "Priority must be 16-bit"); } memset(ð_null, 0, sizeof(struct rte_flow_item_eth)); @@ -553,70 +539,11 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr, action: - /** - * n-tuple only supports forwarding, - * check if the first not void action is QUEUE. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - item, "Not supported action."); - return -rte_errno; - } - filter->queue = - ((const struct rte_flow_action_queue *)act->conf)->index; + filter->queue = q_act->index; - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(filter, 0, sizeof(struct 
rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } filter->priority = (uint16_t)attr->priority; - if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || - attr->priority > IXGBE_MAX_N_TUPLE_PRIO) - filter->priority = 1; + if (attr->priority < IXGBE_MIN_N_TUPLE_PRIO || attr->priority > IXGBE_MAX_N_TUPLE_PRIO) + filter->priority = 1; return 0; } @@ -736,15 +663,40 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, struct rte_eth_ntuple_filter *filter, struct rte_flow_error *error) { - int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only queue is allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev->data, + .check = ixgbe_flow_actions_check, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; if (hw->mac.type != ixgbe_mac_82599EB && hw->mac.type != ixgbe_mac_X540) return -ENOTSUP; - ret = cons_parse_ntuple_filter(attr, pattern, actions, filter, error); + /* validate attributes */ + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + ret = cons_parse_ntuple_filter(attr, pattern, action->conf, filter, error); if (ret) return ret; @@ -757,19 +709,6 @@ ixgbe_parse_ntuple_filter(struct rte_eth_dev *dev, return -rte_errno; } - /* 
Ixgbe doesn't support many priorities. */ - if (filter->priority < IXGBE_MIN_N_TUPLE_PRIO || - filter->priority > IXGBE_MAX_N_TUPLE_PRIO) { - memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "Priority not supported by ntuple filter"); - return -rte_errno; - } - - if (filter->queue >= dev->data->nb_rx_queues) - return -rte_errno; - /* fixed value for ixgbe */ filter->flags = RTE_5TUPLE_FLAGS; return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 11/29] net/ixgbe: use common checks in security filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (9 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 10/29] net/ixgbe: use common checks in ntuple filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 12/29] net/ixgbe: use common checks in FDIR filters Anatoly Burakov ` (17 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in security filter. As a result, some checks have become more stringent. In particular, group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 62 ++++++++++------------------ 1 file changed, 22 insertions(+), 40 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index f6a57af98b..1da66083db 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -557,7 +557,16 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr const struct rte_flow_action_security *security; struct rte_security_session *session; const struct rte_flow_item *item; - const struct rte_flow_action *act; + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only security is allowed here */ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; struct ip_spec spec; int ret; @@ -569,45 +578,18 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr hw->mac.type != ixgbe_mac_E610) return -ENOTSUP; - if (pattern == NULL) { - rte_flow_error_set(error, - EINVAL, 
RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - if (actions == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - if (attr == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; - /* check if next non-void action is security */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_SECURITY) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } - security = act->conf; - if (security == NULL) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "NULL security action config."); - } - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - } + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + + action = parsed_actions.actions[0]; + security = action->conf; /* get the IP pattern*/ item = next_no_void_pattern(pattern, NULL); @@ -647,7 +629,7 @@ ixgbe_parse_security_filter(struct rte_eth_dev *dev, const struct rte_flow_attr ret = ixgbe_crypto_add_ingress_sa_from_flow(session, &spec); if (ret) { rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ACTION, act, + RTE_FLOW_ERROR_TYPE_ACTION, action, "Failed to add security session."); return -rte_errno; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 12/29] net/ixgbe: use common checks in FDIR filters 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (10 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 11/29] net/ixgbe: use common checks in security filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 13/29] net/ixgbe: use common checks in RSS filter Anatoly Burakov ` (16 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in flow director filters (both tunnel and normal). As a result, some checks have become more stringent, in particular group attribute is now explicitly rejected instead of being ignored. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 292 ++++++++++++--------------- 1 file changed, 129 insertions(+), 163 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 1da66083db..4d7f9312a1 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -1213,111 +1213,6 @@ ixgbe_parse_l2_tn_filter(struct rte_eth_dev *dev, return ret; } -/* Parse to get the attr and action info of flow director rule. 
*/ -static int -ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr, - const struct rte_flow_action actions[], - struct ixgbe_fdir_rule *rule, - struct rte_flow_error *error) -{ - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark; - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - /* not supported */ - if (attr->priority) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* check if the first not void action is QUEUE or DROP. */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = (const struct rte_flow_action_queue *)act->conf; - rule->queue = act_q->index; - } else { /* drop */ - /* signature mode does not support drop action. 
*/ - if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - rule->fdirflags = IXGBE_FDIRCMD_DROP; - } - - /* check if the next not void item is MARK */ - act = next_no_void_action(actions, act); - if ((act->type != RTE_FLOW_ACTION_TYPE_MARK) && - (act->type != RTE_FLOW_ACTION_TYPE_END)) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rule->soft_id = 0; - - if (act->type == RTE_FLOW_ACTION_TYPE_MARK) { - mark = (const struct rte_flow_action_mark *)act->conf; - rule->soft_id = mark->id; - act = next_no_void_action(actions, act); - } - - /* check if the next not void item is END */ - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rule, 0, sizeof(struct ixgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - return 0; -} - /* search next no void pattern and skip fuzzy */ static inline const struct rte_flow_item *next_no_fuzzy_pattern( @@ -1424,9 +1319,8 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[]) */ static int ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -1447,29 +1341,39 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, const struct rte_flow_item_vlan *vlan_mask; const struct rte_flow_item_raw *raw_mask; const struct rte_flow_item_raw *raw_spec; + const struct rte_flow_action *fwd_action, *aux_action; + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint8_t j; - struct 
ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + /* check if this is a signature match */ + if (signature_match(pattern)) + rule->mode = RTE_FDIR_MODE_SIGNATURE; + else + rule->mode = RTE_FDIR_MODE_PERFECT; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + /* signature mode does not support drop action. */ + if (rule->mode == RTE_FDIR_MODE_SIGNATURE) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, fwd_action, + "Signature mode does not support drop action."); + return -rte_errno; + } + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *m_act = aux_action->conf; + rule->soft_id = m_act->id; } /** @@ -1501,11 +1405,6 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, return -rte_errno; } - if (signature_match(pattern)) - rule->mode = RTE_FDIR_MODE_SIGNATURE; - else - rule->mode = RTE_FDIR_MODE_PERFECT; - /*Not supported last point for range*/ if (item->last) { rte_flow_error_set(error, EINVAL, @@ -2094,7 +1993,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, } } - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; } #define NVGRE_PROTOCOL 0x6558 @@ -2137,9 +2036,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev, * 
item->last should be NULL. */ static int -ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], +ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_item pattern[], + const struct ci_flow_actions *parsed_actions, struct ixgbe_fdir_rule *rule, struct rte_flow_error *error) { @@ -2152,27 +2050,25 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, const struct rte_flow_item_eth *eth_mask; const struct rte_flow_item_vlan *vlan_spec; const struct rte_flow_item_vlan *vlan_mask; + const struct rte_flow_action *fwd_action, *aux_action; uint32_t j; - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } + fwd_action = parsed_actions->actions[0]; + /* can be NULL */ + aux_action = parsed_actions->actions[1]; - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; + /* set up queue/drop action */ + if (fwd_action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *q_act = fwd_action->conf; + rule->queue = q_act->index; + } else { + rule->fdirflags = IXGBE_FDIRCMD_DROP; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; + /* set up mark action */ + if (aux_action != NULL && aux_action->type == RTE_FLOW_ACTION_TYPE_MARK) { + const struct rte_flow_action_mark *mark = aux_action->conf; + rule->soft_id = mark->id; } /** @@ -2583,7 +2479,56 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, * Do nothing. 
*/ - return ixgbe_parse_fdir_act_attr(attr, actions, rule, error); + return 0; +} + +/* + * Check flow director actions + */ +static int +ixgbe_fdir_actions_check(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const enum rte_flow_action_type fwd_actions[] = { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }; + const struct rte_flow_action *action, *drop_action = NULL; + + /* do the generic checks first */ + int ret = ixgbe_flow_actions_check(parsed_actions, param, error); + if (ret) + return ret; + + /* first action must be a forwarding action */ + action = parsed_actions->actions[0]; + if (!ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "First action must be QUEUE or DROP"); + } + /* remember if we have a drop action */ + if (action->type == RTE_FLOW_ACTION_TYPE_DROP) { + drop_action = action; + } + + /* second action, if specified, must not be a forwarding action */ + action = parsed_actions->actions[1]; + if (action != NULL && ci_flow_action_type_in_list(action->type, fwd_actions)) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + /* if we didn't have a drop action before but now we do, remember that */ + if (drop_action == NULL && action != NULL && action->type == RTE_FLOW_ACTION_TYPE_DROP) { + drop_action = action; + } + /* drop must be the only action */ + if (drop_action != NULL && action != NULL) { + return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Conflicting actions"); + } + return 0; } static int @@ -2597,25 +2542,46 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, int ret; struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ixgbe_adapter *adapter = IXGBE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - 
struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_FDIR_CONF(dev); + struct rte_eth_fdir_conf *fdir_conf = IXGBE_DEV_PRIVATE_TO_FDIR_CONF(adapter); + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* queue/mark/drop allowed here */ + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev->data, + .check = ixgbe_fdir_actions_check + }; + + if (hw->mac.type != ixgbe_mac_82599EB && + hw->mac.type != ixgbe_mac_X540 && + hw->mac.type != ixgbe_mac_X550 && + hw->mac.type != ixgbe_mac_X550EM_x && + hw->mac.type != ixgbe_mac_X550EM_a && + hw->mac.type != ixgbe_mac_E610) + return -ENOTSUP; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + fdir_conf->drop_queue = IXGBE_FDIR_DROP_QUEUE; - if (hw->mac.type != ixgbe_mac_82599EB && - hw->mac.type != ixgbe_mac_X540 && - hw->mac.type != ixgbe_mac_X550 && - hw->mac.type != ixgbe_mac_X550EM_x && - hw->mac.type != ixgbe_mac_X550EM_a && - hw->mac.type != ixgbe_mac_E610) - return -ENOTSUP; - - ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_normal(dev, pattern, &parsed_actions, rule, error); if (!ret) goto step_next; - ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern, - actions, rule, error); + ret = ixgbe_parse_fdir_filter_tunnel(pattern, &parsed_actions, rule, error); if (ret) return ret; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
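The new `ixgbe_fdir_actions_check` callback in the patch above layers an ordering rule on top of the generic allowlist: the first action must be a forwarding action (QUEUE or DROP), a second action must not be another forwarding action, and DROP may not be combined with anything else. The following is a hedged, self-contained restatement of that rule over simplified types — it mirrors the control flow of the patch, but the enum values and function name are illustrative, not the driver's.

```c
#include <assert.h>
#include <stddef.h>

enum act_type { ACT_QUEUE, ACT_DROP, ACT_MARK, ACT_END };

/* Ordering rule enforced by the patch's ixgbe_fdir_actions_check on an
 * already allowlist-filtered action array: returns 1 if the sequence
 * is acceptable, 0 otherwise. */
static int
fdir_order_ok(const enum act_type *acts, size_t n)
{
	/* first action must be QUEUE or DROP */
	if (n == 0 || (acts[0] != ACT_QUEUE && acts[0] != ACT_DROP))
		return 0;
	/* second action, if present, must not be another forwarding action */
	if (n >= 2 && (acts[1] == ACT_QUEUE || acts[1] == ACT_DROP))
		return 0;
	/* DROP must be the only action */
	if (acts[0] == ACT_DROP && n > 1)
		return 0;
	return 1;
}
```

Note that under this rule DROP followed by MARK is rejected, consistent with the "drop must be the only action" branch in the patch.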
* [PATCH v3 13/29] net/ixgbe: use common checks in RSS filter 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (11 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 12/29] net/ixgbe: use common checks in FDIR filters Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 14/29] net/i40e: use common flow attribute checks Anatoly Burakov ` (15 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common attr and action parsing infrastructure in RSS filter. As a result, some checks have become more stringent, in particular: - the group attribute is now explicitly rejected instead of being ignored - RSS now explicitly rejects unsupported RSS types at validation - the priority attribute was previously allowed but rejected values bigger than 0xFFFF despite not using priority anywhere - it is now rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/ixgbe/ixgbe_flow.c | 202 +++++++++++---------------- 1 file changed, 85 insertions(+), 117 deletions(-) diff --git a/drivers/net/intel/ixgbe/ixgbe_flow.c b/drivers/net/intel/ixgbe/ixgbe_flow.c index 4d7f9312a1..8f5b30d329 100644 --- a/drivers/net/intel/ixgbe/ixgbe_flow.c +++ b/drivers/net/intel/ixgbe/ixgbe_flow.c @@ -121,20 +121,6 @@ const struct rte_flow_item *next_no_void_pattern( } } -static inline -const struct rte_flow_action *next_no_void_action( - const struct rte_flow_action actions[], - const struct rte_flow_action *cur) -{ - const struct rte_flow_action *next = - cur ? cur + 1 : &actions[0]; - while (1) { - if (next->type != RTE_FLOW_ACTION_TYPE_VOID) - return next; - next++; - } -} - /* * All ixgbe engines mostly check the same stuff, so use a common check. 
*/ @@ -2611,6 +2597,62 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev, return ret; } +/* Flow actions check specific to RSS filter */ +static int +ixgbe_flow_actions_check_rss(const struct ci_flow_actions *parsed_actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action *action = parsed_actions->actions[0]; + const struct rte_flow_action_rss *rss_act = action->conf; + struct rte_eth_dev_data *dev_data = param->driver_ctx; + const size_t rss_key_len = sizeof(((struct ixgbe_rte_flow_rss_conf *)0)->key); + size_t q_idx, q; + + /* check if queue list is not empty */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue list is empty"); + } + + /* check if each RSS queue is valid */ + for (q_idx = 0; q_idx < rss_act->queue_num; q_idx++) { + q = rss_act->queue[q_idx]; + if (q >= dev_data->nb_rx_queues) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue specified"); + } + } + + /* only support default hash function */ + if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Non-default RSS hash functions are not supported"); + } + /* levels aren't supported */ + if (rss_act->level) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "A nonzero RSS encapsulation level is not supported"); + } + /* check key length */ + if (rss_act->key_len != 0 && rss_act->key_len != rss_key_len) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key must be exactly 40 bytes long"); + } + /* filter out unsupported RSS types */ + if ((rss_act->types & ~IXGBE_RSS_OFFLOAD_ALL) != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS type specified"); + } + 
return 0; +} + static int ixgbe_parse_rss_filter(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, @@ -2618,109 +2660,35 @@ ixgbe_parse_rss_filter(struct rte_eth_dev *dev, struct ixgbe_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - const struct rte_flow_action *act; - const struct rte_flow_action_rss *rss; - uint16_t n; - - /** - * rss only supports forwarding, - * check if the first not void action is RSS. - */ - act = next_no_void_action(actions, NULL); - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - rss = (const struct rte_flow_action_rss *)act->conf; - - if (!rss || !rss->queue_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "no valid queues"); - return -rte_errno; - } - - for (n = 0; n < rss->queue_num; n++) { - if (rss->queue[n] >= dev->data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "queue id > max number of queues"); - return -rte_errno; - } - } - - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "non-default RSS hash functions are not supported"); - if (rss->level) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "a nonzero RSS encapsulation level is not supported"); - if (rss->key_len && rss->key_len != RTE_DIM(rss_conf->key)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS hash key must be exactly 40 bytes"); - if (rss->queue_num > RTE_DIM(rss_conf->queue)) - return rte_flow_error_set - (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, - "too many queues for RSS context"); - if (ixgbe_rss_conf_init(rss_conf, rss)) - return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, act, - "RSS context 
initialization failure"); - - /* check if the next not void item is END */ - act = next_no_void_action(actions, act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - - /* parse attr */ - /* must be input direction */ - if (!attr->ingress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* not supported */ - if (attr->egress) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* not supported */ - if (attr->transfer) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "No support for transfer."); - return -rte_errno; - } - - if (attr->priority > 0xFFFF) { - memset(rss_conf, 0, sizeof(struct ixgbe_rte_flow_rss_conf)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Error priority."); - return -rte_errno; - } + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ap_param = { + .allowed_types = (const enum rte_flow_action_type[]){ + /* only rss allowed here */ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .driver_ctx = dev->data, + .check = ixgbe_flow_actions_check_rss, + .max_actions = 1, + }; + int ret; + const struct rte_flow_action *action; + + /* validate attributes */ + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + /* parse requested actions */ + ret = ci_flow_check_actions(actions, &ap_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; + + if 
(ixgbe_rss_conf_init(rss_conf, action->conf)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "RSS context initialization failure"); return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
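The `ixgbe_flow_actions_check_rss` callback introduced above performs its validation in a fixed order: non-empty queue list, every queue in range, default hash function only, no encapsulation level, exact key length, and no RSS types outside the supported mask. A minimal sketch of that sequence, with placeholder constants standing in for `IXGBE_RSS_OFFLOAD_ALL` and the real key size, could look like this (the struct and values are assumptions for illustration; the hash-function check is folded out since the simplified struct has no such field):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RSS_KEY_LEN 40           /* ixgbe RSS keys are exactly 40 bytes */
#define SUPPORTED_TYPES 0xffULL  /* placeholder for IXGBE_RSS_OFFLOAD_ALL */

struct rss_req {
	const uint16_t *queue;
	uint32_t queue_num;
	uint32_t key_len;
	uint32_t level;
	uint64_t types;
};

/* Returns 1 if the request passes the same sequence of checks the
 * patch's RSS action callback applies, 0 otherwise. */
static int
rss_conf_ok(const struct rss_req *r, uint16_t nb_rx_queues)
{
	uint32_t i;

	if (r->queue_num == 0)
		return 0; /* queue list must not be empty */
	for (i = 0; i < r->queue_num; i++)
		if (r->queue[i] >= nb_rx_queues)
			return 0; /* queue index out of range */
	if (r->level != 0)
		return 0; /* nonzero encapsulation level unsupported */
	if (r->key_len != 0 && r->key_len != RSS_KEY_LEN)
		return 0; /* key, if given, must be exactly 40 bytes */
	if ((r->types & ~SUPPORTED_TYPES) != 0)
		return 0; /* unsupported RSS types rejected */
	return 1;
}
```

The `key_len == 0` escape hatch matches the patch's `rss_act->key_len != 0 && rss_act->key_len != rss_key_len` condition, which allows an unspecified key to fall back to a default.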
* [PATCH v3 14/29] net/i40e: use common flow attribute checks 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (12 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 13/29] net/ixgbe: use common checks in RSS filter Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 15/29] net/i40e: refactor RSS flow parameter checks Anatoly Burakov ` (14 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson There is no need to call the same attribute checks in multiple places when there are no cases where we do otherwise. Therefore, remove all the attribute check calls from all filter call paths and instead check the attributes once at the very beginning of flow validation, and use common flow attribute checks instead of driver-local ones. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_ethdev.h | 1 - drivers/net/intel/i40e/i40e_flow.c | 121 +++------------------------ 2 files changed, 10 insertions(+), 112 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_ethdev.h b/drivers/net/intel/i40e/i40e_ethdev.h index d57c53f661..91ad0f8d0e 100644 --- a/drivers/net/intel/i40e/i40e_ethdev.h +++ b/drivers/net/intel/i40e/i40e_ethdev.h @@ -1315,7 +1315,6 @@ struct i40e_filter_ctx { }; typedef int (*parse_filter_t)(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 967af052ff..0543f7f297 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -26,6 +26,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #define I40E_IPV6_TC_MASK (0xFF << I40E_FDIR_IPv6_TC_OFFSET) #define I40E_IPV6_FRAG_HEADER 44 #define I40E_TENANT_ARRAY_NUM 3 @@ 
-71,40 +73,32 @@ static int i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, const struct rte_flow_action *actions, struct rte_flow_error *error, struct i40e_tunnel_filter_conf *filter); -static int i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error); static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, struct i40e_filter_ctx *filter); static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -118,7 +112,6 @@ static int i40e_flow_flush_ethertype_filter(struct i40e_pf *pf); static int i40e_flow_flush_tunnel_filter(struct i40e_pf *pf); static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], 
const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -130,7 +123,6 @@ i40e_flow_parse_qinq_pattern(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter); static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1208,54 +1200,6 @@ i40e_find_parse_filter_func(struct rte_flow_item *pattern, uint32_t *idx) return parse_filter; } -/* Parse attributes */ -static int -i40e_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Not support priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - static int i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid) { @@ -1443,7 +1387,6 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, static int i40e_flow_parse_ethertype_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -1462,10 +1405,6 @@ i40e_flow_parse_ethertype_filter(struct rte_eth_dev 
*dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_ETHERTYPE; return ret; @@ -2551,7 +2490,6 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, static int i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2568,10 +2506,6 @@ i40e_flow_parse_fdir_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_FDIR; return 0; @@ -2836,7 +2770,6 @@ i40e_flow_parse_l4_pattern(const struct rte_flow_item *pattern, static int i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -2853,10 +2786,6 @@ i40e_flow_parse_l4_cloud_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3087,7 +3016,6 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3105,10 +3033,6 @@ i40e_flow_parse_vxlan_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3338,7 +3262,6 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3356,10 
+3279,6 @@ i40e_flow_parse_nvgre_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3494,7 +3413,6 @@ i40e_flow_parse_mpls_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3512,10 +3430,6 @@ i40e_flow_parse_mpls_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3646,7 +3560,6 @@ i40e_flow_parse_gtp_pattern(struct rte_eth_dev *dev, static int i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3664,10 +3577,6 @@ i40e_flow_parse_gtp_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3763,7 +3672,6 @@ i40e_flow_parse_qinq_pattern(__rte_unused struct rte_eth_dev *dev, static int i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error, @@ -3781,10 +3689,6 @@ i40e_flow_parse_qinq_filter(struct rte_eth_dev *dev, if (ret) return ret; - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter->type = RTE_ETH_FILTER_TUNNEL; return ret; @@ -3803,7 +3707,12 @@ i40e_flow_check(struct rte_eth_dev *dev, uint32_t item_num = 0; /* non-void item number of pattern*/ uint32_t i = 0; bool flag = false; - int ret = I40E_NOT_SUPPORTED; + int ret; + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) { + 
return ret; + } if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3818,22 +3727,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - /* Get the non-void item of action */ while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) i++; if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - ret = i40e_flow_parse_attr(attr, error); - if (ret) - return ret; - filter_ctx->type = RTE_ETH_FILTER_HASH; return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); } @@ -3858,6 +3756,7 @@ i40e_flow_check(struct rte_eth_dev *dev, i40e_pattern_skip_void_item(items, pattern); i = 0; + ret = I40E_NOT_SUPPORTED; do { parse_filter = i40e_find_parse_filter_func(items, &i); if (!parse_filter && !flag) { @@ -3870,7 +3769,7 @@ i40e_flow_check(struct rte_eth_dev *dev, } if (parse_filter) - ret = parse_filter(dev, attr, items, actions, error, filter_ctx); + ret = parse_filter(dev, items, actions, error, filter_ctx); flag = true; } while ((ret < 0) && (i < RTE_DIM(i40e_supported_patterns))); -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 15/29] net/i40e: refactor RSS flow parameter checks 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (13 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 14/29] net/i40e: use common flow attribute checks Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 16/29] net/i40e: use common action checks for ethertype Anatoly Burakov ` (13 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson Currently, the hash parser parameter checks are somewhat confusing as they have multiple mutually exclusive code paths and requirements, and it's difficult to reason about them because RSS flow parsing is interspersed with validation checks. To address that, refactor hash engine error checking to perform almost all validation at the beginning, with only happy paths being implemented in actual parsing functions. Some parameter combinations that were previously ignored (and perhaps produced a warning) are now explicitly rejected: - if no pattern is specified, RSS types are rejected - for queue lists and regions, RSS key is rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 13 +- drivers/net/intel/i40e/i40e_hash.c | 437 ++++++++++++++++++----------- drivers/net/intel/i40e/i40e_hash.h | 2 +- 3 files changed, 274 insertions(+), 178 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 0543f7f297..7e4f6b7169 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -3713,6 +3713,7 @@ i40e_flow_check(struct rte_eth_dev *dev, if (ret) { return ret; } + /* action and pattern validation will happen in each respective engine */ if (!pattern) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, @@ -3727,13 +3728,11 @@ i40e_flow_check(struct rte_eth_dev *dev, return -rte_errno; } - /* Get the 
non-void item of action */ - while ((actions + i)->type == RTE_FLOW_ACTION_TYPE_VOID) - i++; - - if ((actions + i)->type == RTE_FLOW_ACTION_TYPE_RSS) { - filter_ctx->type = RTE_ETH_FILTER_HASH; - return i40e_hash_parse(dev, pattern, actions + i, &filter_ctx->rss_conf, error); + /* try parsing as RSS */ + filter_ctx->type = RTE_ETH_FILTER_HASH; + ret = i40e_hash_parse(dev, pattern, actions, &filter_ctx->rss_conf, error); + if (!ret) { + return ret; } i = 0; diff --git a/drivers/net/intel/i40e/i40e_hash.c b/drivers/net/intel/i40e/i40e_hash.c index 5756ebf255..08bef9d60f 100644 --- a/drivers/net/intel/i40e/i40e_hash.c +++ b/drivers/net/intel/i40e/i40e_hash.c @@ -16,6 +16,8 @@ #include "i40e_ethdev.h" #include "i40e_hash.h" +#include "../common/flow_check.h" + #ifndef BIT #define BIT(n) (1UL << (n)) #endif @@ -925,12 +927,7 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, { const uint8_t *key = rss_act->key; - if (!key || rss_act->key_len != sizeof(rss_conf->key)) { - if (rss_act->key_len != sizeof(rss_conf->key)) - PMD_DRV_LOG(WARNING, - "RSS key length invalid, must be %u bytes, now set key to default", - (uint32_t)sizeof(rss_conf->key)); - + if (key == NULL) { memcpy(rss_conf->key, i40e_rss_key_default, sizeof(rss_conf->key)); } else { memcpy(rss_conf->key, key, sizeof(rss_conf->key)); @@ -941,45 +938,29 @@ i40e_hash_parse_key(const struct rte_flow_action_rss *rss_act, } static int -i40e_hash_parse_queues(const struct rte_eth_dev *dev, - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf, + struct rte_flow_error *error) { - struct i40e_pf *pf; - struct i40e_hw *hw; - uint16_t i; - uint16_t max_queue; - - hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (!rss_act->queue_num || - 
rss_act->queue_num > hw->func_caps.rss_table_size) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queue number"); + rss_conf->symmetric_enable = rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when queues specified"); + i40e_hash_parse_key(rss_act, rss_conf); - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) - max_queue = i40e_pf_calc_configured_queues_num(pf); - else - max_queue = pf->dev_data->nb_rx_queues; + rss_conf->conf.func = rss_act->func; + rss_conf->conf.types = rss_act->types; + rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); - max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); - - for (i = 0; i < rss_act->queue_num; i++) { - if (rss_act->queue[i] >= max_queue) - break; - } - - if (i < rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Invalid RSS queues"); + return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, + rss_conf, error); +} +static int +i40e_hash_parse_queues(const struct rte_flow_action_rss *rss_act, + struct i40e_rte_flow_rss_conf *rss_conf) +{ memcpy(rss_conf->queue, rss_act->queue, rss_act->queue_num * sizeof(rss_conf->queue[0])); rss_conf->conf.queue = rss_conf->queue; @@ -988,113 +969,38 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev, } static int -i40e_hash_parse_queue_region(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], +i40e_hash_parse_queue_region(const struct rte_flow_item pattern[], const struct rte_flow_action_rss *rss_act, struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { - struct i40e_pf *pf; const struct rte_flow_item_vlan *vlan_spec, *vlan_mask; - uint64_t hash_queues; - uint32_t i; - - if (pattern[1].type != RTE_FLOW_ITEM_TYPE_END) - return rte_flow_error_set(error, 
ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - &pattern[1], - "Pattern not supported."); vlan_spec = pattern->spec; vlan_mask = pattern->mask; - if (!vlan_spec || !vlan_mask || - (rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 7) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, pattern, - "Pattern error."); - if (!rss_act->queue) + /* VLAN must have spec and mask */ + if (vlan_spec == NULL || vlan_mask == NULL) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queues not specified"); - - if (rss_act->key_len) - PMD_DRV_LOG(WARNING, - "RSS key is ignored when configure queue region"); + RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN pattern spec and mask required"); + } + /* for mask, VLAN/TCI must be masked appropriately */ + if ((rte_be_to_cpu_16(vlan_mask->hdr.vlan_tci) >> 13) != 0x7) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, &pattern[0], + "VLAN pattern mask invalid"); + } /* Use a 64 bit variable to represent all queues in a region. 
*/ RTE_BUILD_BUG_ON(I40E_MAX_Q_PER_TC > 64); - if (!rss_act->queue_num || - !rte_is_power_of_2(rss_act->queue_num) || - rss_act->queue_num + rss_act->queue[0] > I40E_MAX_Q_PER_TC) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Queue number error"); - - for (i = 1; i < rss_act->queue_num; i++) { - if (rss_act->queue[i - 1] + 1 != rss_act->queue[i]) - break; - } - - if (i < rss_act->queue_num) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Queues must be incremented continuously"); - - /* Map all queues to bits of uint64_t */ - hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & - ~(BIT_ULL(rss_act->queue[0]) - 1); - - pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - if (hash_queues & ~pf->hash_enabled_queues) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "Some queues are not in LUT"); - rss_conf->region_queue_num = (uint8_t)rss_act->queue_num; rss_conf->region_queue_start = rss_act->queue[0]; rss_conf->region_priority = rte_be_to_cpu_16(vlan_spec->hdr.vlan_tci) >> 13; return 0; } -static int -i40e_hash_parse_global_conf(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) -{ - if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "Symmetric function should be set with pattern types"); - - rss_conf->conf.func = rss_act->func; - - if (rss_act->types) - PMD_DRV_LOG(WARNING, - "RSS types are ignored when no pattern specified"); - - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_queue_region(dev, pattern, rss_act, - rss_conf, error); - - if (rss_act->queue) - return i40e_hash_parse_queues(dev, rss_act, rss_conf, error); - - if (rss_act->key_len) { - 
i40e_hash_parse_key(rss_act, rss_conf); - return 0; - } - - if (rss_act->func == RTE_ETH_HASH_FUNCTION_DEFAULT) - PMD_DRV_LOG(WARNING, "Nothing change"); - return 0; -} - static bool i40e_hash_validate_rss_types(uint64_t rss_types) { @@ -1124,83 +1030,274 @@ i40e_hash_validate_rss_types(uint64_t rss_types) } static int -i40e_hash_parse_pattern_act(const struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action_rss *rss_act, - struct i40e_rte_flow_rss_conf *rss_conf, - struct rte_flow_error *error) +i40e_hash_validate_rss_pattern(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) { - if (rss_act->queue) + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + + /* queue list is not supported */ + if (rss_act->queue_num != 0) { return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS Queues not supported when pattern specified"); - rss_conf->symmetric_enable = false; /* by default, symmetric is disabled */ + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues not supported when pattern specified"); + } + /* disallow unsupported hash functions */ switch (rss_act->func) { case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: - rss_conf->symmetric_enable = true; - break; case RTE_ETH_HASH_FUNCTION_DEFAULT: case RTE_ETH_HASH_FUNCTION_TOEPLITZ: case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: break; default: return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, - "RSS hash function not supported " - "when pattern specified"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS hash function not supported when pattern specified"); } if (!i40e_hash_validate_rss_types(rss_act->types)) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - NULL, "RSS types are invalid"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "RSS types are invalid"); - if (rss_act->key_len) - 
i40e_hash_parse_key(rss_act, rss_conf); + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } - rss_conf->conf.func = rss_act->func; - rss_conf->conf.types = rss_act->types; - rss_conf->inset = i40e_hash_get_inset(rss_act->types, rss_conf->symmetric_enable); + return 0; +} - return i40e_hash_get_pattern_pctypes(dev, pattern, rss_act, - rss_conf, error); +static int +i40e_hash_validate_rss_common(const struct rte_flow_action_rss *rss_act, + struct rte_flow_error *error) +{ + /* RSS level is not supported */ + if (rss_act->level != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS level is not supported"); + } + + /* for empty patterns, symmetric toeplitz is not supported */ + if (rss_act->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Symmetric hash function not supported without specific patterns"); + } + + /* hash types are not supported for global RSS configuration */ + if (rss_act->types != 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS types not supported without a pattern"); + } + + /* check RSS key length if it is specified */ + if (rss_act->key_len != 0 && rss_act->key_len != I40E_RSS_KEY_LEN) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key length must be 52 bytes"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_region(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct i40e_adapter *adapter = I40E_DEV_PRIVATE_TO_ADAPTER(param->driver_ctx); + struct i40e_pf 
*pf = &adapter->pf; + uint64_t hash_queues; + int ret; + + ret = i40e_hash_validate_rss_common(rss_act, error); + if (ret) + return ret; + + RTE_BUILD_BUG_ON(sizeof(hash_queues) != sizeof(pf->hash_enabled_queues)); + + /* having RSS key is not supported */ + if (rss_act->key != NULL) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key not supported"); + } + + /* queue region must be specified */ + if (rss_act->queue_num == 0) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queues missing"); + } + + /* queue region must be power of two */ + if (!rte_is_power_of_2(rss_act->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS queue number must be power of two"); + } + + /* generic checks already filtered out discontiguous/non-unique RSS queues */ + + /* queues must not exceed maximum queues per traffic class */ + if (rss_act->queue[rss_act->queue_num - 1] >= I40E_MAX_Q_PER_TC) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue index"); + } + + /* queues must be in LUT */ + hash_queues = (BIT_ULL(rss_act->queue[0] + rss_act->queue_num) - 1) & + ~(BIT_ULL(rss_act->queue[0]) - 1); + + if (hash_queues & ~pf->hash_enabled_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Some queues are not in LUT"); + } + + return 0; +} + +static int +i40e_hash_validate_queue_list(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss_act = actions->actions[0]->conf; + struct i40e_adapter *adapter = I40E_DEV_PRIVATE_TO_ADAPTER(param->driver_ctx); + struct i40e_pf *pf; + struct i40e_hw *hw; + uint16_t max_queue; + bool has_queue, has_key; + int ret; + + ret = i40e_hash_validate_rss_common(rss_act, error); + if (ret) + return 
ret; + + has_queue = rss_act->queue != NULL; + has_key = rss_act->key != NULL; + + /* if we have queues, we must not have key */ + if (has_queue && has_key) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "RSS key for queue region is not supported"); + } + + /* if there are no queues, no further checks needed */ + if (!has_queue) + return 0; + + /* check queue number limits */ + hw = &adapter->hw; + if (rss_act->queue_num > hw->func_caps.rss_table_size) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss_act, "Too many RSS queues"); + } + + pf = &adapter->pf; + if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) + max_queue = i40e_pf_calc_configured_queues_num(pf); + else + max_queue = pf->dev_data->nb_rx_queues; + + max_queue = RTE_MIN(max_queue, I40E_MAX_Q_PER_TC); + + /* we know RSS queues are contiguous so we only need to check last queue */ + if (rss_act->queue[rss_act->queue_num - 1] >= max_queue) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss_act, + "Invalid RSS queue"); + } + + return 0; } int -i40e_hash_parse(const struct rte_eth_dev *dev, +i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = dev->data->dev_private + /* each pattern type will add specific check function */ + }; const struct rte_flow_action_rss *rss_act; + int ret; - if (actions[1].type != RTE_FLOW_ACTION_TYPE_END) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - &actions[1], - "Only support one action for RSS."); - - rss_act = (const struct rte_flow_action_rss 
*)actions[0].conf; - if (rss_act->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, - actions, - "RSS level is not supported"); + /* + * We have two possible paths: global RSS configuration, and an RSS pattern action. + * + * For global patterns, we act on two types of flows: + * - Empty pattern ([END]) + * - VLAN pattern ([VLAN] -> [END]) + * + * Everything else is handled by pattern action parser. + */ + bool is_empty, is_vlan; while (pattern->type == RTE_FLOW_ITEM_TYPE_VOID) pattern++; - if (pattern[0].type == RTE_FLOW_ITEM_TYPE_END || - pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN) - return i40e_hash_parse_global_conf(dev, pattern, rss_act, - rss_conf, error); + is_empty = pattern[0].type == RTE_FLOW_ITEM_TYPE_END; + is_vlan = pattern[0].type == RTE_FLOW_ITEM_TYPE_VLAN && + pattern[1].type == RTE_FLOW_ITEM_TYPE_END; - return i40e_hash_parse_pattern_act(dev, pattern, rss_act, - rss_conf, error); + /* VLAN path */ + if (is_vlan) { + ac_param.check = i40e_hash_validate_queue_region; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + /* set up RSS functions */ + rss_conf->conf.func = rss_act->func; + return i40e_hash_parse_queue_region(pattern, rss_act, rss_conf, error); + } + /* Empty pattern path */ + if (is_empty) { + ac_param.check = i40e_hash_validate_queue_list; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + rss_conf->conf.func = rss_act->func; + /* if there is a queue list, take that path */ + if (rss_act->queue != NULL) { + return i40e_hash_parse_queues(rss_act, rss_conf); + } + /* otherwise just parse RSS key */ + if (rss_act->key != NULL) { + i40e_hash_parse_key(rss_act, rss_conf); + } + return 0; + } + ac_param.check = i40e_hash_validate_rss_pattern; + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if 
(ret) + return ret; + rss_act = parsed_actions.actions[0]->conf; + + /* pattern case */ + return i40e_hash_parse_pattern_act(dev, pattern, rss_act, rss_conf, error); } static void diff --git a/drivers/net/intel/i40e/i40e_hash.h b/drivers/net/intel/i40e/i40e_hash.h index 2513d84565..99df4bccd0 100644 --- a/drivers/net/intel/i40e/i40e_hash.h +++ b/drivers/net/intel/i40e/i40e_hash.h @@ -13,7 +13,7 @@ extern "C" { #endif -int i40e_hash_parse(const struct rte_eth_dev *dev, +int i40e_hash_parse(struct rte_eth_dev *dev, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct i40e_rte_flow_rss_conf *rss_conf, -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 16/29] net/i40e: use common action checks for ethertype 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (14 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 15/29] net/i40e: refactor RSS flow parameter checks Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 17/29] net/i40e: use common action checks for FDIR Anatoly Burakov ` (12 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for ethertype filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 56 +++++++++++++----------------- 1 file changed, 24 insertions(+), 32 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 7e4f6b7169..2638067580 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1345,43 +1345,35 @@ i40e_flow_parse_ethertype_action(struct rte_eth_dev *dev, struct rte_flow_error *error, struct rte_eth_ethertype_filter *filter) { - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; + const struct rte_flow_action *action; + int ret; - /* Check if the first non-void action is QUEUE or DROP. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE && - act->type != RTE_FLOW_ACTION_TYPE_DROP) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + action = parsed_actions.actions[0]; - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = act->conf; + if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *act_q = action->conf; + /* check queue index */ + if (act_q->index >= dev->data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue index"); + } filter->queue = act_q->index; - if (filter->queue >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for" - " ethertype_filter."); - return -rte_errno; - } - } else { + } else if (action->type == RTE_FLOW_ACTION_TYPE_DROP) { filter->flags |= RTE_ETHTYPE_FLAGS_DROP; } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } - return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 17/29] net/i40e: use common action checks for FDIR 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (15 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 16/29] net/i40e: use common action checks for ethertype Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 18/29] net/i40e: use common action checks for tunnel Anatoly Burakov ` (11 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for flow director filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 139 ++++++++++++++++------------- 1 file changed, 76 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index 2638067580..c67556c35c 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -2383,28 +2383,49 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, struct i40e_fdir_filter_conf *filter) { struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const struct rte_flow_action *act; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_FLAG, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is QUEUE or DROP or PASSTHRU. 
*/ - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; + + switch (first->type) { case RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = act->conf; + { + const struct rte_flow_action_queue *act_q = first->conf; + /* check against PF constraints */ + if (!filter->input.flow_ext.is_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } + /* check against VF constraints */ + if (filter->input.flow_ext.is_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid queue ID for FDIR"); + } filter->action.rx_queue = act_q->index; - if ((!filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->dev_data->nb_rx_queues) || - (filter->input.flow_ext.is_vf && - filter->action.rx_queue >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue ID for FDIR."); - return -rte_errno; - } filter->action.behavior = I40E_FDIR_ACCEPT; break; + } case RTE_FLOW_ACTION_TYPE_DROP: filter->action.behavior = I40E_FDIR_REJECT; break; @@ -2412,69 +2433,61 @@ i40e_flow_parse_fdir_action(struct rte_eth_dev *dev, filter->action.behavior = I40E_FDIR_PASSTHRU; break; case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *act_m = first->conf; filter->action.behavior = I40E_FDIR_PASSTHRU; - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; - break; + filter->soft_id = act_m->id; + break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + 
"Invalid first action for FDIR"); } - /* Check if the next non-void item is MARK or FLAG or END. */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - switch (act->type) { + /* do we have another? */ + if (second == NULL) + return 0; + + switch (second->type) { case RTE_FLOW_ACTION_TYPE_MARK: - if (mark_spec) { - /* Double MARK actions requested */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + const struct rte_flow_action_mark *act_m = second->conf; + /* only one mark action can be specified */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } - mark_spec = act->conf; filter->action.report_status = I40E_FDIR_REPORT_ID; - filter->soft_id = mark_spec->id; + filter->soft_id = act_m->id; break; + } case RTE_FLOW_ACTION_TYPE_FLAG: - if (mark_spec) { - /* MARK + FLAG not supported */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + { + /* mark + flag is unsupported */ + if (first->type == RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } filter->action.report_status = I40E_FDIR_NO_REPORT_STATUS; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - if (filter->action.behavior != I40E_FDIR_PASSTHRU) { - /* RSS filter won't be next if FDIR did not pass thru */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; + /* RSS filter only can be after passthru or mark */ + if (first->type != RTE_FLOW_ACTION_TYPE_PASSTHRU && + first->type != RTE_FLOW_ACTION_TYPE_MARK) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } break; - case RTE_FLOW_ACTION_TYPE_END: - return 0; default: - 
rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; - } - - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - return -rte_errno; + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, second, + "Invalid second action for FDIR"); } return 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 18/29] net/i40e: use common action checks for tunnel 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (16 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 17/29] net/i40e: use common action checks for FDIR Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 19/29] net/iavf: use common flow attribute checks Anatoly Burakov ` (10 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Bruce Richardson Use the common flow action checking parsing infrastructure for checking flow actions for flow director tunnel filter. As a result, more stringent checks are performed against parameters, specifically the following: - NULL conf for the VF action is rejected (instead of attempting a NULL dereference) - the second action was expected to be QUEUE, but was previously ignored if it was anything else; it is now rejected Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/i40e/i40e_flow.c | 104 +++++++++++++---------- 1 file changed, 48 insertions(+), 56 deletions(-) diff --git a/drivers/net/intel/i40e/i40e_flow.c b/drivers/net/intel/i40e/i40e_flow.c index c67556c35c..dbc9af4b42 100644 --- a/drivers/net/intel/i40e/i40e_flow.c +++ b/drivers/net/intel/i40e/i40e_flow.c @@ -1101,15 +1101,6 @@ static struct i40e_valid_pattern i40e_supported_patterns[] = { { pattern_fdir_ipv6_sctp, i40e_flow_parse_l4_cloud_filter }, }; -#define NEXT_ITEM_OF_ACTION(act, actions, index) \ - do { \ - act = actions + index; \ - while (act->type == RTE_FLOW_ACTION_TYPE_VOID) { \ - index++; \ - act = actions + index; \ - } \ - } while (0) - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * i40e_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2526,61 +2517,62 @@ i40e_flow_parse_tunnel_action(struct rte_eth_dev *dev, struct i40e_tunnel_filter_conf *filter) { struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - const
struct rte_flow_action *act; const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_vf *act_vf; - uint32_t index = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param ac_param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_PF, + RTE_FLOW_ACTION_TYPE_VF, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + }; + const struct rte_flow_action *first, *second; + int ret; - /* Check if the first non-void action is PF or VF. */ - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_PF && - act->type != RTE_FLOW_ACTION_TYPE_VF) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; - } + ret = ci_flow_check_actions(actions, &ac_param, &parsed_actions, error); + if (ret) + return ret; + first = parsed_actions.actions[0]; + /* can be NULL */ + second = parsed_actions.actions[1]; - if (act->type == RTE_FLOW_ACTION_TYPE_VF) { - act_vf = act->conf; - filter->vf_id = act_vf->id; + /* first action must be PF or VF */ + if (first->type == RTE_FLOW_ACTION_TYPE_VF) { + const struct rte_flow_action_vf *vf = first->conf; + if (vf->id >= pf->vf_num) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Invalid VF ID for tunnel filter"); + return -rte_errno; + } + filter->vf_id = vf->id; filter->is_to_vf = 1; - if (filter->vf_id >= pf->vf_num) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid VF ID for tunnel filter"); - return -rte_errno; - } + } else if (first->type != RTE_FLOW_ACTION_TYPE_PF) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, first, + "Unsupported action"); } - /* Check if the next non-void item is QUEUE */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) { - act_q = act->conf; - filter->queue_id = act_q->index; - if 
((!filter->is_to_vf) && - (filter->queue_id >= pf->dev_data->nb_rx_queues)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } else if (filter->is_to_vf && - (filter->queue_id >= pf->vf_nb_qps)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid queue ID for tunnel filter"); - return -rte_errno; - } - } + /* check if second action is QUEUE */ + if (second == NULL) + return 0; - /* Check if the next non-void item is END */ - index++; - NEXT_ITEM_OF_ACTION(act, actions, index); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - act, "Not supported action."); - return -rte_errno; + act_q = second->conf; + /* check queue ID for PF flow */ + if (!filter->is_to_vf && act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); + } + /* check queue ID for VF flow */ + if (filter->is_to_vf && act_q->index >= pf->vf_nb_qps) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue ID for tunnel filter"); } + filter->queue_id = act_q->index; return 0; } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
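The tunnel parser's remaining driver-specific work is ordering-sensitive: resolve the PF/VF target first (validating the VF id against `pf->vf_num`), then bound the optional QUEUE index against that target's queue count. A minimal sketch, with plain parameters standing in for the `i40e_pf` fields:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct tunnel_target {
	bool is_to_vf;
	uint16_t vf_id;
	uint16_t queue_id;
};

/* returns 0 on success, -1 on an invalid VF id or queue index;
 * queue < 0 means no QUEUE action was given (it is optional) */
static int
tunnel_filter_check(bool to_vf, uint16_t vf_id, int queue,
		    uint16_t vf_num, uint16_t pf_nb_rx_queues, uint16_t vf_nb_qps,
		    struct tunnel_target *out)
{
	if (to_vf && vf_id >= vf_num)
		return -1;		/* VF id out of range */
	out->is_to_vf = to_vf;
	out->vf_id = to_vf ? vf_id : 0;

	if (queue < 0)
		return 0;		/* no QUEUE action to validate */

	/* the queue limit depends on whether traffic is steered to a VF */
	if ((uint16_t)queue >= (to_vf ? vf_nb_qps : pf_nb_rx_queues))
		return -1;
	out->queue_id = (uint16_t)queue;
	return 0;
}
```

Validating the target before the queue index is what allows the patch to drop the interleaved `is_to_vf` checks from the original QUEUE handling.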
* [PATCH v3 19/29] net/iavf: use common flow attribute checks 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (17 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 18/29] net/i40e: use common action checks for tunnel Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 20/29] net/iavf: use common action checks for IPsec Anatoly Burakov ` (9 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Replace custom attr checks with a call to common checks. Flow subscription engine supports priority but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 8 ++- drivers/net/intel/iavf/iavf_fsub.c | 20 ++++++- drivers/net/intel/iavf/iavf_generic_flow.c | 67 +++------------------- drivers/net/intel/iavf/iavf_generic_flow.h | 2 +- drivers/net/intel/iavf/iavf_hash.c | 10 ++-- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 8 ++- 6 files changed, 43 insertions(+), 72 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 9eae874800..7dce5086cf 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #include "virtchnl.h" #include "iavf_rxtx.h" @@ -1592,7 +1593,7 @@ iavf_fdir_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1603,8 +1604,9 @@ iavf_fdir_parse(struct iavf_adapter *ad, memset(filter, 0, sizeof(*filter)); - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; item = 
iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index bfb34695de..010c1d5a44 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -20,6 +20,7 @@ #include <rte_flow.h> #include <iavf.h> #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define IAVF_IPV6_ADDR_LENGTH 16 @@ -725,12 +726,15 @@ iavf_fsub_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; int ret = 0; filter = rte_zmalloc(NULL, sizeof(*filter), 0); @@ -741,6 +745,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, return -ENOMEM; } + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + goto error; + + if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Only support priority 0 and 1."); + ret = -rte_errno; + goto error; + } + /* search flow subscribe pattern */ pattern_match_item = iavf_search_pattern_match_item(pattern, array, array_len, error); @@ -762,7 +778,7 @@ iavf_fsub_parse(struct iavf_adapter *ad, goto error; /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, priority, + ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, error, filter); error: diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 42ecc90d1d..b8f6414b16 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -1785,7 +1785,7 @@ enum rte_flow_item_type 
iavf_pattern_eth_ipv6_udp_l2tpv2_ppp_ipv6_tcp[] = { typedef struct iavf_flow_engine * (*parse_engine_t)(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -1939,45 +1939,6 @@ iavf_unregister_parser(struct iavf_flow_parser *parser, } } -static int -iavf_flow_valid_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* support priority for flow subscribe */ - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "Not support group."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * iavf_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2106,7 +2067,7 @@ static struct iavf_flow_engine * iavf_parse_engine_create(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2120,7 +2081,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - pattern, actions, 
priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2136,7 +2097,7 @@ static struct iavf_flow_engine * iavf_parse_engine_validate(struct iavf_adapter *ad, struct rte_flow *flow, struct iavf_parser_list *parser_list, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2150,7 +2111,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad, if (parser_node->parser->parse_pattern_action(ad, parser_node->parser->array, parser_node->parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) continue; engine = parser_node->parser->engine; @@ -2182,7 +2143,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t iavf_parse_engine, struct rte_flow_error *error) { - int ret = IAVF_ERR_CONFIG; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); @@ -2200,29 +2160,18 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - - ret = iavf_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->dist_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; *engine = iavf_parse_engine(ad, flow, &vf->ipsec_crypto_parser_list, - attr->priority, pattern, actions, error); + attr, pattern, actions, error); if (*engine) return 0; diff --git a/drivers/net/intel/iavf/iavf_generic_flow.h b/drivers/net/intel/iavf/iavf_generic_flow.h index b97cf8b7ff..ddc554996d 
100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.h +++ b/drivers/net/intel/iavf/iavf_generic_flow.h @@ -471,7 +471,7 @@ typedef int (*parse_pattern_action_t)(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index cb10eeab78..3607d6d680 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -22,6 +22,7 @@ #include "iavf_log.h" #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" #define IAVF_PHINT_NONE 0 #define IAVF_PHINT_GTPU BIT_ULL(0) @@ -86,7 +87,7 @@ iavf_hash_parse_pattern_action(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1519,7 +1520,7 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1528,8 +1529,9 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, uint64_t phint = IAVF_PHINT_NONE; int ret = 0; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index 47102e75f2..fd35997cbd 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -14,6 +14,7 @@ #include "iavf_rxtx.h" #include "iavf_log.h" #include 
"iavf_generic_flow.h" +#include "../common/flow_check.h" #include "iavf_ipsec_crypto.h" #include "iavf_ipsec_crypto_capabilities.h" @@ -1951,15 +1952,16 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { struct iavf_pattern_match_item *item = NULL; int ret = -1; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
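[The patch above folds each driver's `iavf_flow_valid_attr()` into one common attribute check, with priority support opted into per engine via `ci_flow_attr_check_param`. A minimal standalone sketch of that semantics follows; the struct layouts and the meaning of a NULL parameter are assumptions inferred from how the diffs call `ci_flow_check_attr(attr, NULL, error)` versus passing `.allow_priority = true`.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <errno.h>

struct flow_attr {
	uint32_t group;
	uint32_t priority;
	bool ingress;
	bool egress;
};

struct attr_check_param {
	bool allow_priority;
};

/* Common attribute validation: ingress only, no egress, no groups.
 * A NULL param selects the defaults, under which priority is rejected;
 * engines that support priority (e.g. flow subscription) opt in. */
static int
check_attr(const struct flow_attr *attr, const struct attr_check_param *param)
{
	bool allow_prio = param != NULL && param->allow_priority;

	if (attr == NULL || !attr->ingress || attr->egress || attr->group != 0)
		return -EINVAL;
	if (!allow_prio && attr->priority != 0)
		return -EINVAL;
	return 0;
}
```

Note that range checks beyond "priority is used at all" (such as fsub's "only priority 0 and 1") stay in the engine, as the fsub diff shows.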
* [PATCH v3 20/29] net/iavf: use common action checks for IPsec 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (18 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 19/29] net/iavf: use common flow attribute checks Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 21/29] net/iavf: use common action checks for hash Anatoly Burakov ` (8 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for IPsec filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_ipsec_crypto.c | 35 ++++++++++------------ 1 file changed, 15 insertions(+), 20 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_ipsec_crypto.c b/drivers/net/intel/iavf/iavf_ipsec_crypto.c index fd35997cbd..f3a1f89468 100644 --- a/drivers/net/intel/iavf/iavf_ipsec_crypto.c +++ b/drivers/net/intel/iavf/iavf_ipsec_crypto.c @@ -1745,26 +1745,12 @@ parse_udp_item(const struct rte_flow_item_udp *item, struct rte_udp_hdr *udp) udp->src_port = item->hdr.src_port; } -static int -has_security_action(const struct rte_flow_action actions[], - const void **session) -{ - /* only {SECURITY; END} supported */ - if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY && - actions[1].type == RTE_FLOW_ACTION_TYPE_END) { - *session = actions[0].conf; - return true; - } - return false; -} - static struct iavf_ipsec_flow_item * iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], + const struct rte_security_session *session, uint32_t type) { - const void *session; struct iavf_ipsec_flow_item *ipsec_flow = rte_malloc("security-flow-rule", sizeof(struct iavf_ipsec_flow_item), 0); @@ -1831,9 +1817,6 @@ iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, goto flow_cleanup; } - if 
(!has_security_action(actions, &session)) - goto flow_cleanup; - if (!iavf_ipsec_crypto_action_valid(ethdev, session, ipsec_flow->spi)) goto flow_cleanup; @@ -1956,6 +1939,14 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_SECURITY, + RTE_FLOW_ACTION_TYPE_END, + }, + .max_actions = 1, + }; struct iavf_pattern_match_item *item = NULL; int ret = -1; @@ -1963,12 +1954,16 @@ iavf_ipsec_flow_parse(struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret < 0) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (item && item->meta) { + const struct rte_security_session *session = parsed_actions.actions[0]->conf; uint32_t type = (uint64_t)(item->meta); struct iavf_ipsec_flow_item *fi = - iavf_ipsec_flow_item_parse(ad->vf.eth_dev, - pattern, actions, type); + iavf_ipsec_flow_item_parse(ad->vf.eth_dev, pattern, session, type); if (fi && meta) { *meta = fi; ret = 0; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
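[The removed `has_security_action()` matched the exact sequence {SECURITY; END}; under the common checker this collapses into "allow only SECURITY, at most one action", after which the session is read from the single parsed action's conf. The standalone sketch below illustrates that reduction with illustrative types; it assumes, as the driver does, a non-NULL conf on the SECURITY action.]

```c
#include <assert.h>
#include <stddef.h>

enum act_type { ACT_END, ACT_VOID, ACT_SECURITY };

struct action {
	enum act_type type;
	const void *conf;
};

/* Return the security session if the list is exactly one SECURITY action
 * (VOID actions interleaved anywhere, which the common checker skips),
 * NULL otherwise. */
static const void *
get_security_session(const struct action *acts)
{
	const void *session = NULL;

	for (; acts->type != ACT_END; acts++) {
		if (acts->type == ACT_VOID)
			continue;
		/* reject non-SECURITY actions and duplicate SECURITY actions */
		if (acts->type != ACT_SECURITY || session != NULL)
			return NULL;
		session = acts->conf;
	}
	return session;
}
```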
* [PATCH v3 21/29] net/iavf: use common action checks for hash 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (19 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 20/29] net/iavf: use common action checks for IPsec Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:13 ` [PATCH v3 22/29] net/iavf: use common action checks for FDIR Anatoly Burakov ` (7 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_hash.c | 143 +++++++++++++++-------------- 1 file changed, 72 insertions(+), 71 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_hash.c b/drivers/net/intel/iavf/iavf_hash.c index 3607d6d680..75dde35764 100644 --- a/drivers/net/intel/iavf/iavf_hash.c +++ b/drivers/net/intel/iavf/iavf_hash.c @@ -1427,95 +1427,81 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -iavf_hash_parse_action(struct iavf_pattern_match_item *match_item, - const struct rte_flow_action actions[], - uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, - struct rte_flow_error *error) +iavf_hash_parse_rss_type(struct iavf_pattern_match_item *match_item, + const struct rte_flow_action_rss *rss, + uint64_t pattern_hint, struct iavf_rss_meta *rss_meta, + struct rte_flow_error *error) { - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; - /* Supported action is RSS. 
*/ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; + rss_meta->rss_algorithm = rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ ? + VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC : + VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_XOR_ASYMMETRIC; - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "function simple_xor is not supported"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC; - } else { - rss_meta->rss_algorithm = - VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC; - } + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == IAVF_PHINT_RAW) + return 0; - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); + if (iavf_any_invalid_rss_type(rss->func, rss_type, match_item->input_set_mask)) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS type not supported"); + } - if (rss->queue_num) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); + memcpy(&rss_meta->proto_hdrs, match_item->meta, sizeof(struct virtchnl_proto_hdrs)); - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == IAVF_PHINT_RAW) - break; + iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, rss_type, pattern_hint); - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. - */ - rss_type = rte_eth_rss_hf_refine(rss_type); + return 0; +} - if (iavf_any_invalid_rss_type(rss->func, rss_type, - match_item->input_set_mask)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); +static int +iavf_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss = actions->actions[0]->conf; - memcpy(&rss_meta->proto_hdrs, match_item->meta, - sizeof(struct virtchnl_proto_hdrs)); + /* filter out unsupported RSS functions */ + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + default: + break; + } - iavf_refine_proto_hdrs(&rss_meta->proto_hdrs, - rss_type, pattern_hint); - break; + if (rss->level != 0) { + return rte_flow_error_set(error, ENOTSUP, + 
RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Nonzero RSS encapsulation level is not supported"); + } - case RTE_FLOW_ACTION_TYPE_END: - break; + if (rss->key_len != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS key is not supported"); + } - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; - } + if (rss->queue_num != 0) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "RSS queue region is not supported"); } return 0; } static int -iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, +iavf_hash_parse_pattern_action(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, uint32_t array_len, const struct rte_flow_item pattern[], @@ -1524,6 +1510,17 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .driver_ctx = ad, + .check = iavf_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct iavf_pattern_match_item *pattern_match_item; struct iavf_rss_meta *rss_meta_ptr; uint64_t phint = IAVF_PHINT_NONE; @@ -1533,6 +1530,10 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1565,8 +1566,8 @@ iavf_hash_parse_pattern_action(__rte_unused struct iavf_adapter *ad, } } - ret = iavf_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret = 
iavf_hash_parse_rss_type(pattern_match_item, rss, phint, rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
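[The hash patch above moves RSS-conf validation into a driver-supplied `.check` callback that the common checker invokes after the type/count checks pass. This standalone sketch mirrors the constraints `iavf_hash_parse_action_check()` enforces in the diff; the struct layout and enum values are simplified stand-ins, not the rte_flow definitions.]

```c
#include <assert.h>
#include <stdint.h>
#include <errno.h>

enum hash_func { F_TOEPLITZ, F_SYMMETRIC_TOEPLITZ, F_SIMPLE_XOR };

struct rss_conf {
	enum hash_func func;
	uint32_t level;     /* encapsulation level: only 0 is supported */
	uint32_t key_len;   /* explicit RSS keys are not supported */
	uint32_t queue_num; /* queue regions are not supported here */
};

/* Driver-side check invoked after the generic action-list validation:
 * reject hash functions and RSS parameters the VF cannot honor. */
static int
hash_check_rss(const struct rss_conf *rss)
{
	if (rss->func == F_SIMPLE_XOR)
		return -ENOTSUP;
	if (rss->level != 0 || rss->key_len != 0 || rss->queue_num != 0)
		return -ENOTSUP;
	return 0;
}
```

Splitting it this way means the engine's own parse step (`iavf_hash_parse_rss_type()` in the diff) can assume a well-formed RSS action and only refine hash types.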
* [PATCH v3 22/29] net/iavf: use common action checks for FDIR 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (20 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 21/29] net/iavf: use common action checks for hash Anatoly Burakov @ 2026-04-10 13:13 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 23/29] net/iavf: use common action checks for fsub Anatoly Burakov ` (6 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:13 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fdir.c | 359 +++++++++++++---------------- 1 file changed, 157 insertions(+), 202 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fdir.c b/drivers/net/intel/iavf/iavf_fdir.c index 7dce5086cf..1f17f8fa24 100644 --- a/drivers/net/intel/iavf/iavf_fdir.c +++ b/drivers/net/intel/iavf/iavf_fdir.c @@ -441,204 +441,6 @@ static struct iavf_flow_engine iavf_fdir_engine = { .type = IAVF_FLOW_ENGINE_FDIR, }; -static int -iavf_fdir_parse_action_qregion(struct iavf_adapter *ad, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct virtchnl_filter_action *filter_action) -{ - struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != rss->queue[i] + 1) { - rte_flow_error_set(error, EINVAL, - 
RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - if (rss->queue_num > vf->max_rss_qregion) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size cannot be large than the supported max RSS queue region"); - return -rte_errno; - } - - filter_action->act_conf.queue.index = rss->queue[0]; - filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; - - return 0; -} - -static int -iavf_fdir_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct iavf_fdir_conf *filter) -{ - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - int ret; - - int number = 0; - struct virtchnl_filter_action *filter_action; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_DROP; - - 
filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_QUEUE; - filter_action->act_conf.queue.index = act_q->index; - - if (filter_action->act_conf.queue.index >= - ad->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid queue for FDIR."); - return -rte_errno; - } - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_Q_REGION; - - ret = iavf_fdir_parse_action_qregion(ad, - error, actions, filter_action); - if (ret) - return ret; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - - filter->mark_flag = 1; - mark_spec = actions->conf; - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - - filter_action->type = VIRTCHNL_ACTION_MARK; - filter_action->act_conf.mark_id = mark_spec->id; - - filter->add_fltr.rule_cfg.action_set.count = ++number; - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (number > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (dest_num + mark_num == 0) { - rte_flow_error_set(error, 
EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* Mark only is equal to mark + passthru. */ - if (dest_num == 0) { - filter_action = &filter->add_fltr.rule_cfg.action_set.actions[number]; - filter_action->type = VIRTCHNL_ACTION_PASSTHRU; - filter->add_fltr.rule_cfg.action_set.count = ++number; - } - - return 0; -} - static bool iavf_fdir_refine_input_set(const uint64_t input_set, const uint64_t input_set_mask, @@ -1587,6 +1389,145 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad, return 0; } +static int +iavf_fdir_action_check_qregion(struct iavf_adapter *ad, + const struct rte_flow_action_rss *rss, + struct rte_flow_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); + + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + rss->queue_num <= IAVF_FDIR_MAX_QREGION_SIZE)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + } + + if (rss->queue_num > vf->max_rss_qregion) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size cannot be large than the supported max RSS queue region"); + } + + return 0; +} + +static int +iavf_fdir_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct iavf_adapter *ad = param->driver_ctx; + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); + struct iavf_fdir_conf *filter = 
&vf->fdir.conf; + uint32_t dest_num = 0, mark_num = 0; + size_t i, number = 0; + bool has_drop = false; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + has_drop = true; + + filter_action->type = VIRTCHNL_ACTION_DROP; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + dest_num++; + + act_q = act->conf; + + filter_action->type = VIRTCHNL_ACTION_QUEUE; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid queue index."); + } + filter_action->act_conf.queue.index = act_q->index; + + break; + } + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_num++; + + filter_action->type = VIRTCHNL_ACTION_Q_REGION; + + ret = iavf_fdir_action_check_qregion(ad, rss, error); + if (ret) + return ret; + + filter_action->act_conf.queue.index = rss->queue[0]; + filter_action->act_conf.queue.region = rte_fls_u32(rss->queue_num) - 1; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec; + mark_num++; + + filter->mark_flag = 1; + mark_spec = act->conf; + + filter_action->type = VIRTCHNL_ACTION_MARK; + filter_action->act_conf.mark_id = mark_spec->id; + + break; + } + default: + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action."); + } + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + if (dest_num > 1 || mark_num > 1 || (has_drop && mark_num > 1)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + 
"Unsupported action combination"); + } + + /* Mark only is equal to mark + passthru. */ + if (dest_num == 0) { + struct virtchnl_filter_action *filter_action = + &filter->add_fltr.rule_cfg.action_set.actions[number]; + filter_action->type = VIRTCHNL_ACTION_PASSTHRU; + filter->add_fltr.rule_cfg.action_set.count = ++number; + } + + return 0; +} + static int iavf_fdir_parse(struct iavf_adapter *ad, struct iavf_pattern_match_item *array, @@ -1597,6 +1538,20 @@ iavf_fdir_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fdir_parse_action_check, + .driver_ctx = ad + }; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct iavf_fdir_conf *filter = &vf->fdir.conf; struct iavf_pattern_match_item *item = NULL; @@ -1608,6 +1563,10 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = iavf_search_pattern_match_item(pattern, array, array_len, error); if (!item) return -rte_errno; @@ -1617,10 +1576,6 @@ iavf_fdir_parse(struct iavf_adapter *ad, if (ret) goto error; - ret = iavf_fdir_parse_action(ad, actions, error, filter); - if (ret) - goto error; - if (meta) *meta = filter; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
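[The FDIR patch above keeps a dedicated queue-region validator. The standalone sketch below gathers the constraints visible in the diffs (size greater than 1, power of two, bounded region size, last queue within the device's Rx queue count); the limit value and function names are illustrative, not the driver's.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

#define MAX_QREGION_SIZE 128

static bool
is_pow2(uint32_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

/* Validate an FDIR RSS queue region. The region is later encoded as a base
 * queue index plus log2(size), so the last queue in the (contiguous) array
 * determines whether the region fits the device's Rx queues. */
static int
check_qregion(const uint16_t *queues, uint32_t num, uint16_t nb_rx_queues)
{
	if (num <= 1 || !is_pow2(num) || num > MAX_QREGION_SIZE)
		return -EINVAL;
	if (queues[num - 1] >= nb_rx_queues)
		return -EINVAL;
	return 0;
}
```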
* [PATCH v3 23/29] net/iavf: use common action checks for fsub 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (21 preceding siblings ...) 2026-04-10 13:13 ` [PATCH v3 22/29] net/iavf: use common action checks for FDIR Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 24/29] net/iavf: use common action checks for flow query Anatoly Burakov ` (5 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow action parsing infrastructure to check flow actions for the flow subscription filter. The existing implementation had a couple of issues that do not rise to the level of bugs, but are still questionable design choices. For one, the DROP action was accepted by the action check (a single action is allowed as long as it isn't RSS or QUEUE) but later rejected by the action parse stage (because the absence of a PORT_REPRESENTOR action is treated as an error). This is fixed by removing DROP action support from the check stage. For another, the PORT_REPRESENTOR action incremented the action counter without writing anything into the action array; since the action list is zero-initialized, this meant the default action (drop) was kept in the list. Because the actual PF treats drop as a no-op, nothing bad happened when a DROP action ended up in the list of actions, but nothing bad happens either if there is no action at all, so we remedy these unorthodox semantics by treating the PORT_REPRESENTOR action as a no-op and not adding anything to the action list. As a final note, now that all filter parsing code paths use the common action check infrastructure, we can remove the NULL check for actions from the beginning of the parsing path, as this is now handled by each engine. 
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_fsub.c | 248 +++++++++------------ drivers/net/intel/iavf/iavf_generic_flow.c | 7 - 2 files changed, 105 insertions(+), 150 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_fsub.c b/drivers/net/intel/iavf/iavf_fsub.c index 010c1d5a44..4131937cc7 100644 --- a/drivers/net/intel/iavf/iavf_fsub.c +++ b/drivers/net/intel/iavf/iavf_fsub.c @@ -540,89 +540,46 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[], } static int -iavf_fsub_parse_action(struct iavf_adapter *ad, - const struct rte_flow_action *actions, +iavf_fsub_parse_action(const struct ci_flow_actions *actions, uint32_t priority, struct rte_flow_error *error, struct iavf_fsub_conf *filter) { - const struct rte_flow_action *action; - const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_rss *act_qgrop; - struct virtchnl_filter_action *filter_action; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; - uint16_t i, num = 0, dest_num = 0, vf_num = 0; - uint16_t rule_port_id; + uint16_t num_actions = 0; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + struct virtchnl_filter_action *filter_action = + &filter->sub_fltr.actions.actions[num_actions]; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { switch (action->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_ethdev = action->conf; - rule_port_id = ad->dev_data->port_id; - if (rule_port_id != act_ethdev->port_id) - goto error1; - - filter->sub_fltr.actions.count = ++num; - break; + /* nothing to be done, but skip the action */ + continue; case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - filter_action = 
&filter->sub_fltr.actions.actions[num]; - - act_q = action->conf; - if (act_q->index >= ad->dev_data->nb_rx_queues) - goto error2; - + { + const struct rte_flow_action_queue *act_q = action->conf; filter_action->type = VIRTCHNL_ACTION_QUEUE; filter_action->act_conf.queue.index = act_q->index; - filter->sub_fltr.actions.count = ++num; break; + } case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - filter_action = &filter->sub_fltr.actions.actions[num]; - - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error2; + { + const struct rte_flow_action_rss *act_qgrp = action->conf; filter_action->type = VIRTCHNL_ACTION_Q_REGION; - filter_action->act_conf.queue.index = - act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - - if (i == MAX_QGRP_NUM_TYPE) - goto error2; - - if ((act_qgrop->queue[0] + act_qgrop->queue_num) > - ad->dev_data->nb_rx_queues) - goto error3; - - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error4; - - filter_action->act_conf.queue.region = act_qgrop->queue_num; - filter->sub_fltr.actions.count = ++num; + filter_action->act_conf.queue.index = act_qgrp->queue[0]; + filter_action->act_conf.queue.region = act_qgrp->queue_num; break; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* cannot happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type."); } + filter->sub_fltr.actions.count = ++num_actions; } /* 0 denotes lowest priority of recipe and highest priority @@ -630,91 +587,86 @@ iavf_fsub_parse_action(struct iavf_adapter *ad, */ filter->sub_fltr.priority = priority; - if (num > VIRTCHNL_MAX_NUM_ACTIONS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Action numbers exceed the maximum value"); - return -rte_errno; - 
} - - if (vf_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action, vf action must be added"); - return -rte_errno; - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - return 0; - -error1: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid port id"); - return -rte_errno; - -error2: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action type or queue number"); - return -rte_errno; - -error3: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid queue region indexes"); - return -rte_errno; - -error4: - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Discontinuous queue region"); - return -rte_errno; } static int -iavf_fsub_check_action(const struct rte_flow_action *actions, +iavf_fsub_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, struct rte_flow_error *error) { - const struct rte_flow_action *action; - enum rte_flow_action_type action_type; - uint16_t actions_num = 0; - bool vf_valid = false; - bool queue_valid = false; + const struct iavf_adapter *ad = param->driver_ctx; + bool vf = false; + bool queue = false; + size_t i; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { + /* + * allowed action types: + * 1. PORT_REPRESENTOR only + * 2. 
PORT_REPRESENTOR + QUEUE/RSS + */ + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *action = actions->actions[i]; + switch (action->type) { case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - vf_valid = true; - actions_num++; + { + const struct rte_flow_action_ethdev *act_ethdev = action->conf; + + if (act_ethdev->port_id != ad->dev_data->port_id) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_ethdev, + "Invalid port id"); + } + vf = true; break; + } case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrp = action->conf; + + /* must be between 2 and 128 and be a power of 2 */ + if (act_qgrp->queue_num < 2 || act_qgrp->queue_num > 128 || + !rte_is_power_of_2(act_qgrp->queue_num)) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid number of queues in RSS queue group"); + } + /* last queue must not exceed total number of queues */ + if (act_qgrp->queue[0] + act_qgrp->queue_num > ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_qgrp, + "Invalid queue index in RSS queue group"); + } + + queue = true; + break; + } case RTE_FLOW_ACTION_TYPE_QUEUE: - queue_valid = true; - actions_num++; + { + const struct rte_flow_action_queue *act_q = action->conf; + + if (act_q->index >= ad->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, act_q, + "Invalid queue index"); + } + + queue = true; break; - case RTE_FLOW_ACTION_TYPE_DROP: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; + } default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action type"); - return -rte_errno; + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } } - - if (!((actions_num == 1 && !queue_valid) || - (actions_num == 2 && 
vf_valid && queue_valid))) { - rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, "Invalid action number"); - return -rte_errno; + /* QUEUE/RSS must be accompanied by PORT_REPRESENTOR */ + if (queue != vf) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "Invalid action combination"); } return 0; @@ -730,6 +682,18 @@ iavf_fsub_parse(struct iavf_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 2, + .check = iavf_fsub_action_check, + .driver_ctx = ad, + }; struct iavf_fsub_conf *filter; struct iavf_pattern_match_item *pattern_match_item = NULL; struct ci_flow_attr_check_param attr_param = { @@ -749,6 +713,10 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + goto error; + if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, @@ -772,14 +740,8 @@ iavf_fsub_parse(struct iavf_adapter *ad, if (ret) goto error; - /* check flow subscribe pattern action */ - ret = iavf_fsub_check_action(actions, error); - if (ret) - goto error; - /* parse flow subscribe pattern action */ - ret = iavf_fsub_parse_action((void *)ad, actions, attr->priority, - error, filter); + ret = iavf_fsub_parse_action(&parsed_actions, attr->priority, error, filter); error: if (!ret && meta) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index b8f6414b16..022caf5fe2 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -2153,13 +2153,6 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if 
(!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - *engine = iavf_parse_engine(ad, flow, &vf->rss_parser_list, attr, pattern, actions, error); if (*engine) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
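The RSS queue-group rule that `iavf_fsub_action_check()` enforces in the patch above (group size a power of two between 2 and 128, and the group fitting within the device's Rx queue count) can be modelled in isolation. The helper names here are assumptions for illustration, not driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Power-of-two test via the standard bit trick (equivalent in spirit to
 * rte_is_power_of_2() used in the patch). */
static bool is_pow2(uint32_t v)
{
    return v != 0 && (v & (v - 1)) == 0;
}

/* Mirrors the fsub queue-group validity rule: size must be a power of two
 * in [2, 128], and queue[0] + queue_num must not exceed nb_rx_queues. */
static bool qgroup_valid(uint16_t first_queue, uint16_t queue_num, uint16_t nb_rx_queues)
{
    if (queue_num < 2 || queue_num > 128 || !is_pow2(queue_num))
        return false;
    return (uint32_t)first_queue + queue_num <= nb_rx_queues;
}
```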
* [PATCH v3 24/29] net/iavf: use common action checks for flow query 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (22 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 23/29] net/iavf: use common action checks for fsub Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 25/29] net/ice: use common flow attribute checks Anatoly Burakov ` (4 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Vladimir Medvedkin Use the common flow parsing infrastructure to validate query actions. Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com> --- drivers/net/intel/iavf/iavf_generic_flow.c | 31 +++++++++++----------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/drivers/net/intel/iavf/iavf_generic_flow.c b/drivers/net/intel/iavf/iavf_generic_flow.c index 022caf5fe2..8925449a5e 100644 --- a/drivers/net/intel/iavf/iavf_generic_flow.c +++ b/drivers/net/intel/iavf/iavf_generic_flow.c @@ -17,6 +17,7 @@ #include "iavf.h" #include "iavf_generic_flow.h" +#include "../common/flow_check.h" static struct iavf_engine_list engine_list = TAILQ_HEAD_INITIALIZER(engine_list); @@ -2332,10 +2333,18 @@ iavf_flow_query(struct rte_eth_dev *dev, void *data, struct rte_flow_error *error) { - int ret = -EINVAL; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]) { + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1 + }; struct iavf_adapter *ad = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_flow_query_count *count = data; + int ret; if (!iavf_flow_is_valid(flow) || !flow->engine->query_count) { rte_flow_error_set(error, EINVAL, @@ -2344,19 +2353,9 @@ iavf_flow_query(struct rte_eth_dev *dev, return -rte_errno; } - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case 
RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_COUNT: - ret = flow->engine->query_count(ad, flow, count, error); - break; - default: - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - } - } - return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret < 0) + return ret; + + return flow->engine->query_count(ad, flow, count, error); } -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
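The flow-query rule the patch above encodes via `max_actions = 1` (any number of VOIDs plus a single COUNT) can be sketched as follows. The patch does not show whether the common checker accepts an empty action list, so this sketch conservatively requires exactly one COUNT; the enum values are stand-ins for `enum rte_flow_action_type`:

```c
#include <assert.h>
#include <stddef.h>

enum qact { Q_END, Q_VOID, Q_COUNT, Q_RSS };

/* Returns 1 if the END-terminated list contains exactly one COUNT action
 * (VOIDs ignored), 0 otherwise. */
static int query_actions_valid(const enum qact *acts)
{
    size_t count = 0;
    for (const enum qact *a = acts; *a != Q_END; a++) {
        if (*a == Q_VOID)
            continue;
        if (*a != Q_COUNT)
            return 0; /* only COUNT is queryable */
        if (++count > 1)
            return 0; /* at most one COUNT */
    }
    return count == 1;
}
```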
* [PATCH v3 25/29] net/ice: use common flow attribute checks 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (23 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 24/29] net/iavf: use common action checks for flow query Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 26/29] net/ice: use common action checks for hash Anatoly Burakov ` (3 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Replace custom attr checks with a call to common checks. Switch engine supports priority (0 or 1) but other engines don't, so we move the attribute checks into the engines. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 9 ++-- drivers/net/intel/ice/ice_fdir_filter.c | 8 ++- drivers/net/intel/ice/ice_generic_flow.c | 59 +++-------------------- drivers/net/intel/ice/ice_generic_flow.h | 2 +- drivers/net/intel/ice/ice_hash.c | 11 +++-- drivers/net/intel/ice/ice_switch_filter.c | 22 +++++++-- 6 files changed, 48 insertions(+), 63 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 6754a40044..0421578b32 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -27,6 +27,8 @@ #include "ice_generic_flow.h" #include "base/ice_flow.h" +#include "../common/flow_check.h" + #define MAX_ACL_SLOTS_ID 2048 #define ICE_ACL_INSET_ETH_IPV4 ( \ @@ -970,7 +972,7 @@ ice_acl_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -980,8 +982,9 @@ ice_acl_parse(struct ice_adapter *ad, uint64_t input_set; int ret; - if (priority >= 1) - return -rte_errno; + ret = 
ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index b8e77373a3..9e26352b81 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -15,6 +15,8 @@ #include "ice_rxtx.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_FDIR_IPV6_TC_OFFSET 20 #define ICE_IPV6_TC_MASK (0xFF << ICE_FDIR_IPV6_TC_OFFSET) @@ -2804,7 +2806,7 @@ ice_fdir_parse(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority __rte_unused, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -2815,6 +2817,10 @@ ice_fdir_parse(struct ice_adapter *ad, bool raw = false; int ret; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + memset(filter, 0, sizeof(*filter)); item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); diff --git a/drivers/net/intel/ice/ice_generic_flow.c b/drivers/net/intel/ice/ice_generic_flow.c index 62f0c334a1..fe903a975c 100644 --- a/drivers/net/intel/ice/ice_generic_flow.c +++ b/drivers/net/intel/ice/ice_generic_flow.c @@ -16,6 +16,7 @@ #include <rte_malloc.h> #include <rte_tailq.h> +#include "../common/flow_check.h" #include "ice_ethdev.h" #include "ice_generic_flow.h" @@ -1959,7 +1960,7 @@ enum rte_flow_item_type pattern_eth_ipv6_udp_l2tpv2_ppp_ipv6_tcp[] = { typedef bool (*parse_engine_t)(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error); @@ -2045,44 +2046,6 @@ ice_flow_uninit(struct ice_adapter *ad) } } -static int -ice_flow_valid_attr(const 
struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "Not support egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->transfer) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, - attr, "Not support transfer."); - return -rte_errno; - } - - if (attr->priority > 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "Only support priority 0 and 1."); - return -rte_errno; - } - - return 0; -} - /* Find the first VOID or non-VOID item pointer */ static const struct rte_flow_item * ice_find_first_item(const struct rte_flow_item *item, bool is_void) @@ -2360,7 +2323,7 @@ static bool ice_parse_engine_create(struct ice_adapter *ad, struct rte_flow *flow, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2378,7 +2341,7 @@ ice_parse_engine_create(struct ice_adapter *ad, if (parser->parse_pattern_action(ad, parser->array, parser->array_len, - pattern, actions, priority, &meta, error) < 0) + pattern, actions, attr, &meta, error) < 0) return false; RTE_ASSERT(parser->engine->create != NULL); @@ -2390,7 +2353,7 @@ static bool ice_parse_engine_validate(struct ice_adapter *ad, struct rte_flow *flow __rte_unused, struct ice_flow_parser *parser, - uint32_t priority, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], struct rte_flow_error *error) @@ -2407,7 +2370,7 @@ ice_parse_engine_validate(struct ice_adapter *ad, return parser->parse_pattern_action(ad, parser->array, 
parser->array_len, - pattern, actions, priority, + pattern, actions, attr, NULL, error) >= 0; } @@ -2435,7 +2398,6 @@ ice_flow_process_filter(struct rte_eth_dev *dev, parse_engine_t ice_parse_engine, struct rte_flow_error *error) { - int ret = ICE_ERR_NOT_SUPPORTED; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_flow_parser *parser; @@ -2460,15 +2422,10 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - ret = ice_flow_valid_attr(attr, error); - if (ret) - return ret; - *engine = NULL; /* always try hash engine first */ if (ice_parse_engine(ad, flow, &ice_hash_parser, - attr->priority, pattern, - actions, error)) { + attr, pattern, actions, error)) { *engine = ice_hash_parser.engine; return 0; } @@ -2489,7 +2446,7 @@ ice_flow_process_filter(struct rte_eth_dev *dev, return -rte_errno; } - if (ice_parse_engine(ad, flow, parser, attr->priority, + if (ice_parse_engine(ad, flow, parser, attr, pattern, actions, error)) { *engine = parser->engine; return 0; diff --git a/drivers/net/intel/ice/ice_generic_flow.h b/drivers/net/intel/ice/ice_generic_flow.h index c0eed84610..adb52d5bdf 100644 --- a/drivers/net/intel/ice/ice_generic_flow.h +++ b/drivers/net/intel/ice/ice_generic_flow.h @@ -525,7 +525,7 @@ typedef int (*parse_pattern_action_t)(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index 77829e607b..bd42bc2a4a 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -26,6 +26,8 @@ #include "ice_ethdev.h" #include "ice_generic_flow.h" +#include "../common/flow_check.h" + #define ICE_PHINT_NONE 0 #define ICE_PHINT_VLAN BIT_ULL(0) #define ICE_PHINT_PPPOE BIT_ULL(1) @@ -107,7 +109,7 @@ ice_hash_parse_pattern_action(struct 
ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error); @@ -1185,7 +1187,7 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1194,8 +1196,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; - if (priority >= 1) - return -rte_errno; + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index b25e5eaad3..d8c0e7c59c 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -26,6 +26,7 @@ #include "ice_generic_flow.h" #include "ice_dcf_ethdev.h" +#include "../common/flow_check.h" #define MAX_QGRP_NUM_TYPE 7 #define MAX_INPUT_SET_BYTE 32 @@ -1768,7 +1769,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, uint32_t array_len, const struct rte_flow_item pattern[], const struct rte_flow_action actions[], - uint32_t priority, + const struct rte_flow_attr *attr, void **meta, struct rte_flow_error *error) { @@ -1784,6 +1785,21 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, enum ice_sw_tunnel_type tun_type = ICE_NON_TUN; struct ice_pattern_match_item *pattern_match_item = NULL; + struct ci_flow_attr_check_param attr_param = { + .allow_priority = true, + }; + + ret = ci_flow_check_attr(attr, &attr_param, error); + if (ret) + return ret; + + /* Allow only two priority values - 0 or 1 */ + if (attr->priority > 1) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL, + "Invalid priority for switch filter"); + return -rte_errno; + } for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { item_num++; @@ -1859,10 +1875,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, goto error; if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, priority, + ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, priority, error, + ret = ice_switch_parse_action(pf, actions, attr->priority, error, &rule_info); if (ret) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
* [PATCH v3 26/29] net/ice: use common action checks for hash 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (24 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 25/29] net/ice: use common flow attribute checks Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 27/29] net/ice: use common action checks for FDIR Anatoly Burakov ` (2 subsequent siblings) 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for hash filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_hash.c | 178 +++++++++++++++++-------------- 1 file changed, 95 insertions(+), 83 deletions(-) diff --git a/drivers/net/intel/ice/ice_hash.c b/drivers/net/intel/ice/ice_hash.c index bd42bc2a4a..40bac92f8a 100644 --- a/drivers/net/intel/ice/ice_hash.c +++ b/drivers/net/intel/ice/ice_hash.c @@ -1090,94 +1090,92 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func, } static int -ice_hash_parse_action(struct ice_pattern_match_item *pattern_match_item, - const struct rte_flow_action actions[], +ice_hash_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param __rte_unused, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss; + + rss = actions->actions[0]->conf; + + switch (rss->func) { + case RTE_ETH_HASH_FUNCTION_DEFAULT: + case RTE_ETH_HASH_FUNCTION_TOEPLITZ: + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR: + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ: + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Selected RSS hash function not supported"); + } + + if (rss->level) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a 
nonzero RSS encapsulation level is not supported"); + + if (rss->key_len) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a nonzero RSS key_len is not supported"); + + if (rss->queue) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "a non-NULL RSS queue is not supported"); + + return 0; +} + +static int +ice_hash_parse_rss_action(struct ice_pattern_match_item *pattern_match_item, + const struct rte_flow_action_rss *rss, uint64_t pattern_hint, struct ice_rss_meta *rss_meta, struct rte_flow_error *error) { struct ice_rss_hash_cfg *cfg = pattern_match_item->meta; - enum rte_flow_action_type action_type; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action; uint64_t rss_type; + bool symm = false; - /* Supported action is RSS. */ - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - rss = action->conf; - rss_type = rss->types; - - /* Check hash function and save it to rss_meta. 
*/ - if (pattern_match_item->pattern_list != - pattern_empty && rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Not supported flow"); - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR){ - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; - return 0; - } else if (rss->func == - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { - rss_meta->hash_function = - RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; - if (pattern_hint == ICE_PHINT_RAW) - rss_meta->raw.symm = true; - else - cfg->symm = true; - } - - if (rss->level) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS encapsulation level is not supported"); - - if (rss->key_len) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a nonzero RSS key_len is not supported"); - - if (rss->queue) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "a non-NULL RSS queue is not supported"); - - /* If pattern type is raw, no need to refine rss type */ - if (pattern_hint == ICE_PHINT_RAW) - break; - - /** - * Check simultaneous use of SRC_ONLY and DST_ONLY - * of the same level. 
- */ - rss_type = rte_eth_rss_hf_refine(rss_type); - - if (ice_any_invalid_rss_type(rss->func, rss_type, - pattern_match_item->input_set_mask_o)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - action, "RSS type not supported"); - - rss_meta->cfg = *cfg; - ice_refine_hash_cfg(&rss_meta->cfg, - rss_type, pattern_hint); - break; - case RTE_FLOW_ACTION_TYPE_END: - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, action, - "Invalid action."); - return -rte_errno; + if (rss->func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { + if (pattern_match_item->pattern_list != pattern_empty) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "XOR hash function is only supported for empty pattern"); } + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; + return 0; } + if (rss->func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) { + rss_meta->hash_function = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ; + symm = true; + } + + /* If pattern type is raw, no need to refine rss type */ + if (pattern_hint == ICE_PHINT_RAW) { + rss_meta->raw.symm = symm; + return 0; + } + cfg->symm = symm; + + /** + * Check simultaneous use of SRC_ONLY and DST_ONLY + * of the same level. 
+ */ + rss_type = rte_eth_rss_hf_refine(rss->types); + + if (ice_any_invalid_rss_type(rss->func, rss_type, + pattern_match_item->input_set_mask_o)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + rss, "RSS type not supported"); + + rss_meta->cfg = *cfg; + ice_refine_hash_cfg(&rss_meta->cfg, + rss_type, pattern_hint); + return 0; } @@ -1191,15 +1189,29 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { - int ret = 0; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_hash_parse_action_check, + }; + const struct rte_flow_action_rss *rss; struct ice_pattern_match_item *pattern_match_item; struct ice_rss_meta *rss_meta_ptr; uint64_t phint = ICE_PHINT_NONE; + int ret = 0; ret = ci_flow_check_attr(attr, NULL, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + rss_meta_ptr = rte_zmalloc(NULL, sizeof(*rss_meta_ptr), 0); if (!rss_meta_ptr) { rte_flow_error_set(error, EINVAL, @@ -1231,9 +1243,9 @@ ice_hash_parse_pattern_action(__rte_unused struct ice_adapter *ad, } } - /* Check rss action. */ - ret = ice_hash_parse_action(pattern_match_item, actions, phint, - rss_meta_ptr, error); + rss = parsed_actions.actions[0]->conf; + ret = ice_hash_parse_rss_action(pattern_match_item, rss, phint, + rss_meta_ptr, error); error: if (!ret && meta) -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
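The RSS-conf gate that `ice_hash_parse_action_check()` applies in the patch above (a supported hash function, zero encapsulation level, zero key_len, and no explicit queue list) boils down to a simple predicate. The enum values below are stand-ins for `enum rte_eth_hash_function`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum hash_fn { FN_DEFAULT, FN_TOEPLITZ, FN_SIMPLE_XOR, FN_SYMMETRIC_TOEPLITZ, FN_OTHER };

/* Minimal stand-in for the fields of struct rte_flow_action_rss that the
 * check inspects. */
struct rss_conf {
    enum hash_fn func;
    uint32_t level;
    uint32_t key_len;
    const uint16_t *queue;
};

/* Returns true if the conf passes the hash engine's gate. */
static bool rss_conf_valid(const struct rss_conf *rss)
{
    switch (rss->func) {
    case FN_DEFAULT:
    case FN_TOEPLITZ:
    case FN_SIMPLE_XOR:
    case FN_SYMMETRIC_TOEPLITZ:
        break;
    default:
        return false; /* unsupported hash function */
    }
    /* no tunnel-level hashing, no custom key, no queue list */
    return rss->level == 0 && rss->key_len == 0 && rss->queue == NULL;
}
```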
* [PATCH v3 27/29] net/ice: use common action checks for FDIR 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (25 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 26/29] net/ice: use common action checks for hash Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 28/29] net/ice: use common action checks for switch Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 29/29] net/ice: use common action checks for ACL Anatoly Burakov 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for FDIR filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_fdir_filter.c | 375 +++++++++++++----------- 1 file changed, 203 insertions(+), 172 deletions(-) diff --git a/drivers/net/intel/ice/ice_fdir_filter.c b/drivers/net/intel/ice/ice_fdir_filter.c index 9e26352b81..dca45479fb 100644 --- a/drivers/net/intel/ice/ice_fdir_filter.c +++ b/drivers/net/intel/ice/ice_fdir_filter.c @@ -1693,177 +1693,6 @@ static struct ice_flow_engine ice_fdir_engine = { .type = ICE_FLOW_ENGINE_FDIR, }; -static int -ice_fdir_parse_action_qregion(struct ice_pf *pf, - struct rte_flow_error *error, - const struct rte_flow_action *act, - struct ice_fdir_filter_conf *filter) -{ - const struct rte_flow_action_rss *rss = act->conf; - uint32_t i; - - if (act->type != RTE_FLOW_ACTION_TYPE_RSS) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - return -rte_errno; - } - - if (rss->queue_num <= 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Queue region size can't be 0 or 1."); - return -rte_errno; - } - - /* check if queue index for queue region is continuous */ - for (i = 0; i < rss->queue_num - 1; i++) { - if (rss->queue[i + 1] != 
rss->queue[i] + 1) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Discontinuous queue region"); - return -rte_errno; - } - } - - if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue region indexes."); - return -rte_errno; - } - - if (!(rte_is_power_of_2(rss->queue_num) && - (rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE))) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "The region size should be any of the following values:" - "1, 2, 4, 8, 16, 32, 64, 128 as long as the total number " - "of queues do not exceed the VSI allocation."); - return -rte_errno; - } - - filter->input.q_index = rss->queue[0]; - filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; - filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; - - return 0; -} - -static int -ice_fdir_parse_action(struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_fdir_filter_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_mark *mark_spec = NULL; - const struct rte_flow_action_count *act_count; - uint32_t dest_num = 0; - uint32_t mark_num = 0; - uint32_t counter_num = 0; - int ret; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - 
ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_PASSTHRU: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - break; - case RTE_FLOW_ACTION_TYPE_RSS: - dest_num++; - - ret = ice_fdir_parse_action_qregion(pf, - error, actions, filter); - if (ret) - return ret; - break; - case RTE_FLOW_ACTION_TYPE_MARK: - mark_num++; - filter->mark_flag = 1; - mark_spec = actions->conf; - filter->input.fltr_id = mark_spec->id; - filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; - break; - case RTE_FLOW_ACTION_TYPE_COUNT: - counter_num++; - - act_count = actions->conf; - filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; - rte_memcpy(&filter->act_count, act_count, - sizeof(filter->act_count)); - - break; - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - if (mark_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many mark actions"); - return -rte_errno; - } - - if (counter_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Too many count actions"); - return -rte_errno; - } - - if (dest_num + mark_num + counter_num == 0) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Empty action"); - return -rte_errno; - } - - /* set default action to PASSTHRU mode, in "mark/count only" case. 
*/ - if (dest_num == 0) - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; - - return 0; -} static int ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, @@ -2800,6 +2629,188 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_fdir_parse_action(struct ice_adapter *ad, + const struct ci_flow_actions *actions, + struct rte_flow_error *error) +{ + struct ice_pf *pf = &ad->pf; + struct ice_fdir_filter_conf *filter = &pf->fdir.conf; + bool dest_set = false; + size_t i; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_set = true; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + dest_set = true; + + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + dest_set = true; + + filter->input.q_index = rss->queue[0]; + filter->input.q_region = rte_fls_u32(rss->queue_num) - 1; + filter->input.dest_ctl = ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QGROUP; + + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + const struct rte_flow_action_mark *mark_spec = act->conf; + filter->mark_flag = 1; + filter->input.fltr_id = mark_spec->id; + filter->input.fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + const struct rte_flow_action_count *act_count = act->conf; + + filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS; + rte_memcpy(&filter->act_count, act_count, + sizeof(filter->act_count)); + break; + } + default: + /* Should not happen */ + return 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + /* set default action to PASSTHRU mode, in "mark/count only" case. */ + if (!dest_set) { + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_OTHER; + } + + return 0; +} + +static int +ice_fdir_check_action_qregion(struct ice_pf *pf, + struct rte_flow_error *error, + const struct rte_flow_action_rss *rss) +{ + if (rss->queue_num <= 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Queue region size can't be 0 or 1."); + } + + if (rss->queue[rss->queue_num - 1] >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "Invalid queue region indexes."); + } + + if (!(rte_is_power_of_2(rss->queue_num) && + (rss->queue_num <= ICE_FDIR_MAX_QREGION_SIZE))) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, rss, + "The region size should be any of the following values:" + "2, 4, 8, 16, 32, 64, 128 as long as the total number " + "of queues do not exceed the VSI allocation."); + + return 0; +} + +static int +ice_fdir_check_action(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + uint32_t dest_num = 0; + uint32_t mark_num = 0; + uint32_t counter_num = 0; + size_t i; + int ret; + + for (i = 0; i < actions->count; i++) { + const struct rte_flow_action *act = actions->actions[i]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + dest_num++; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid queue for FDIR."); + } + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_PASSTHRU: + 
dest_num++; + break; + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *rss = act->conf; + + dest_num++; + ret = ice_fdir_check_action_qregion(pf, error, rss); + if (ret) + return ret; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + mark_num++; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + counter_num++; + break; + } + default: + /* Should not happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + } + + if (dest_num > 1 || mark_num > 1 || counter_num > 1) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Unsupported action combination"); + } + + return 0; +} + static int ice_fdir_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -2810,6 +2821,21 @@ ice_fdir_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_PASSTHRU, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_RSS, + RTE_FLOW_ACTION_TYPE_MARK, + RTE_FLOW_ACTION_TYPE_COUNT, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 3, + .check = ice_fdir_check_action, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_fdir_filter_conf *filter = &pf->fdir.conf; struct ice_pattern_match_item *item = NULL; @@ -2822,6 +2848,11 @@ ice_fdir_parse(struct ice_adapter *ad, return ret; memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); @@ -2850,7 +2881,7 @@ ice_fdir_parse(struct ice_adapter *ad, goto error; } - ret = ice_fdir_parse_action(ad, actions, error, filter); + ret = ice_fdir_parse_action(ad, &parsed_actions, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related 
[flat|nested] 83+ messages in thread
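The FDIR queue-region check above requires a power-of-two region size and programs it as `q_region = rte_fls_u32(queue_num) - 1`. A minimal sketch of that sizing rule, with hypothetical stand-in names for the DPDK helpers `rte_is_power_of_2()` and `rte_fls_u32()`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for rte_is_power_of_2() and rte_fls_u32(); the DPDK
 * originals are declared in rte_common.h / rte_bitops.h. */
bool
is_power_of_2(uint32_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

uint32_t
fls_u32(uint32_t x)
{
	uint32_t r = 0;

	while (x != 0) {
		r++;
		x >>= 1;
	}
	return r; /* 1-based position of the highest set bit; 0 for x == 0 */
}

/*
 * Mirrors the sizing rule split across ice_fdir_check_action_qregion()
 * and ice_fdir_parse_action(): a region of 2^k queues (2 <= 2^k <= max)
 * is programmed as q_region = k.
 */
int
queue_region_encode(uint32_t queue_num, uint32_t max_size, uint32_t *q_region)
{
	if (queue_num <= 1 || queue_num > max_size || !is_power_of_2(queue_num))
		return -1;
	*q_region = fls_u32(queue_num) - 1;
	return 0;
}
```

For example, a 4-queue region encodes as `q_region = 2`, and a 128-queue region (the `ICE_FDIR_MAX_QREGION_SIZE` case) as `q_region = 7`.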
* [PATCH v3 28/29] net/ice: use common action checks for switch 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (26 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 27/29] net/ice: use common action checks for FDIR Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 2026-04-10 13:22 ` [PATCH v3 29/29] net/ice: use common action checks for ACL Anatoly Burakov 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for switch filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_switch_filter.c | 370 +++++++++++----------- 1 file changed, 184 insertions(+), 186 deletions(-) diff --git a/drivers/net/intel/ice/ice_switch_filter.c b/drivers/net/intel/ice/ice_switch_filter.c index d8c0e7c59c..9a46e3b413 100644 --- a/drivers/net/intel/ice/ice_switch_filter.c +++ b/drivers/net/intel/ice/ice_switch_filter.c @@ -35,6 +35,8 @@ #define ICE_IPV4_PROTO_NVGRE 0x002F #define ICE_SW_PRI_BASE 6 +#define ICE_SW_MAX_QUEUES 128 + #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) #define ICE_SW_INSET_MAC_VLAN ( \ @@ -1527,85 +1529,38 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], } static int -ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, - const struct rte_flow_action *actions, +ice_switch_parse_dcf_action(const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { const struct rte_flow_action_ethdev *act_ethdev; - const struct rte_flow_action *action; const struct rte_eth_dev *repr_dev; enum rte_flow_action_type action_type; - uint16_t rule_port_id, backer_port_id; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - 
switch (action_type) { - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid_port_id; - - /* For traffic to original DCF port */ - rule_port_id = ad->parent.pf.dev_data->port_id; - - if (rule_port_id != act_ethdev->port_id) - goto invalid_port_id; - - rule_info->sw_act.vsi_handle = 0; - - break; - -invalid_port_id: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; - act_ethdev = action->conf; - - if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) - goto invalid; - - repr_dev = &rte_eth_devices[act_ethdev->port_id]; - - if (!repr_dev->data) - goto invalid; - - rule_port_id = ad->parent.pf.dev_data->port_id; - backer_port_id = repr_dev->data->backer_port_id; - - if (backer_port_id != rule_port_id) - goto invalid; - - rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; - break; - -invalid: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid ethdev_port_id"); - return -rte_errno; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = ICE_DROP_PACKET; - break; - - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + rule_info->sw_act.vsi_handle = 0; + break; + + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + rule_info->sw_act.fltr_act = ICE_FWD_TO_VSI; + act_ethdev = action->conf; + repr_dev = &rte_eth_devices[act_ethdev->port_id]; + rule_info->sw_act.vsi_handle = repr_dev->data->representor_id; + break; + + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = 
ICE_DROP_PACKET; + break; + + default: + /* Should never reach */ + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type"); + return -rte_errno; } rule_info->sw_act.src = rule_info->sw_act.vsi_handle; @@ -1621,73 +1576,38 @@ ice_switch_parse_dcf_action(struct ice_dcf_adapter *ad, static int ice_switch_parse_action(struct ice_pf *pf, - const struct rte_flow_action *actions, + const struct rte_flow_action *action, uint32_t priority, struct rte_flow_error *error, struct ice_adv_rule_info *rule_info) { struct ice_vsi *vsi = pf->main_vsi; - struct rte_eth_dev_data *dev_data = pf->adapter->pf.dev_data; const struct rte_flow_action_queue *act_q; const struct rte_flow_action_rss *act_qgrop; - uint16_t base_queue, i; - const struct rte_flow_action *action; + uint16_t base_queue; enum rte_flow_action_type action_type; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { - 2, 4, 8, 16, 32, 64, 128}; base_queue = pf->base_queue + vsi->base_queue; - for (action = actions; action->type != - RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) - goto error; - rule_info->sw_act.fltr_act = - ICE_FWD_TO_QGRP; - rule_info->sw_act.fwd_id.q_id = - base_queue + act_qgrop->queue[0]; - for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) - break; - } - if (i == MAX_QGRP_NUM_TYPE) - goto error; - if ((act_qgrop->queue[0] + - act_qgrop->queue_num) > - dev_data->nb_rx_queues) - goto error1; - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) - goto error2; - rule_info->sw_act.qgrp_size = - act_qgrop->queue_num; - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - act_q = action->conf; - if (act_q->index >= dev_data->nb_rx_queues) - goto error; - rule_info->sw_act.fltr_act = - ICE_FWD_TO_Q; - rule_info->sw_act.fwd_id.q_id = - 
base_queue + act_q->index; - break; - - case RTE_FLOW_ACTION_TYPE_DROP: - rule_info->sw_act.fltr_act = - ICE_DROP_PACKET; - break; - - case RTE_FLOW_ACTION_TYPE_VOID: - break; - - default: - goto error; - } + action_type = action->type; + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_RSS: + act_qgrop = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_QGRP; + rule_info->sw_act.fwd_id.q_id = base_queue + act_qgrop->queue[0]; + rule_info->sw_act.qgrp_size = act_qgrop->queue_num; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + act_q = action->conf; + rule_info->sw_act.fltr_act = ICE_FWD_TO_Q; + rule_info->sw_act.fwd_id.q_id = base_queue + act_q->index; + break; + case RTE_FLOW_ACTION_TYPE_DROP: + rule_info->sw_act.fltr_act = ICE_DROP_PACKET; + break; + default: + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + action, "Invalid action type or queue number"); + return -rte_errno; } rule_info->sw_act.vsi_handle = vsi->idx; @@ -1699,65 +1619,120 @@ ice_switch_parse_action(struct ice_pf *pf, rule_info->priority = ICE_SW_PRI_BASE - priority; return 0; - -error: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type or queue number"); - return -rte_errno; - -error1: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue region indexes"); - return -rte_errno; - -error2: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Discontinuous queue region"); - return -rte_errno; } static int -ice_switch_check_action(const struct rte_flow_action *actions, - struct rte_flow_error *error) +ice_switch_dcf_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) { + struct ice_dcf_adapter *ad = param->driver_ctx; const struct rte_flow_action *action; enum rte_flow_action_type action_type; - uint16_t actions_num = 0; - - for (action = actions; action->type != - 
RTE_FLOW_ACTION_TYPE_END; action++) { - action_type = action->type; - switch (action_type) { - case RTE_FLOW_ACTION_TYPE_RSS: - case RTE_FLOW_ACTION_TYPE_QUEUE: - case RTE_FLOW_ACTION_TYPE_DROP: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: - actions_num++; - break; - case RTE_FLOW_ACTION_TYPE_VOID: - continue; - default: - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action type"); - return -rte_errno; + const struct rte_flow_action_ethdev *act_ethdev; + const struct rte_eth_dev *repr_dev; + + action = actions->actions[0]; + action_type = action->type; + + switch (action_type) { + case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + { + uint16_t expected_port_id, backer_port_id; + act_ethdev = action->conf; + + if (!rte_eth_dev_is_valid_port(act_ethdev->port_id)) + goto invalid_port_id; + + expected_port_id = ad->parent.pf.dev_data->port_id; + + if (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR) { + if (expected_port_id != act_ethdev->port_id) + goto invalid_port_id; + } else { + repr_dev = &rte_eth_devices[act_ethdev->port_id]; + + if (!repr_dev->data) + goto invalid_port_id; + + backer_port_id = repr_dev->data->backer_port_id; + + if (backer_port_id != expected_port_id) + goto invalid_port_id; } + + break; +invalid_port_id: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid port ID"); + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } - if (actions_num != 1) { - rte_flow_error_set(error, - EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid action number"); - return -rte_errno; + return 0; +} + +static int +ice_switch_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error 
*error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + struct rte_eth_dev_data *dev_data = pf->dev_data; + const struct rte_flow_action *action = actions->actions[0]; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + { + const struct rte_flow_action_rss *act_qgrop; + act_qgrop = action->conf; + + /* Check bounds on number of queues */ + if (act_qgrop->queue_num < 2 || act_qgrop->queue_num > ICE_SW_MAX_QUEUES) + goto err_rss; + + /* must be power of 2 */ + if (!rte_is_power_of_2(act_qgrop->queue_num)) + goto err_rss; + + /* queues are monotonous and contiguous so check last queue */ + if ((act_qgrop->queue[0] + act_qgrop->queue_num) > dev_data->nb_rx_queues) + goto err_rss; + + break; +err_rss: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue region"); + } + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q; + act_q = action->conf; + if (act_q->index >= dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid queue"); + } + + break; + } + case RTE_FLOW_ACTION_TYPE_DROP: + break; + default: + /* Should never reach */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "Invalid action type"); } return 0; @@ -1788,11 +1763,38 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, struct ci_flow_attr_check_param attr_param = { .allow_priority = true, }; + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param dcf_param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_switch_dcf_action_check, + }; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_RSS, + 
RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_switch_action_check, + .driver_ctx = ad, + }; ret = ci_flow_check_attr(attr, &attr_param, error); if (ret) return ret; + ret = ci_flow_check_actions(actions, (ad->hw.dcf_enabled) ? &dcf_param : ¶m, + &parsed_actions, error); + if (ret) + goto error; + /* Allow only two priority values - 0 or 1 */ if (attr->priority > 1) { rte_flow_error_set(error, EINVAL, @@ -1870,16 +1872,12 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad, memset(&rule_info, 0, sizeof(rule_info)); rule_info.tun_type = tun_type; - ret = ice_switch_check_action(actions, error); - if (ret) - goto error; - if (ad->hw.dcf_enabled) - ret = ice_switch_parse_dcf_action((void *)ad, actions, attr->priority, - error, &rule_info); + ret = ice_switch_parse_dcf_action(parsed_actions.actions[0], + attr->priority, error, &rule_info); else - ret = ice_switch_parse_action(pf, actions, attr->priority, error, - &rule_info); + ret = ice_switch_parse_action(pf, parsed_actions.actions[0], + attr->priority, error, &rule_info); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
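The new switch check bounds the whole region by comparing only `queue[0] + queue_num` against `nb_rx_queues`, on the comment's premise that the queues are contiguous. A sketch of both halves of that reasoning — the contiguity loop from the pre-patch code, and the single bound check it enables; the function names here are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Contiguity loop as in the pre-patch ice_switch_parse_action(). */
bool
queues_contiguous(const uint16_t *queue, uint32_t queue_num)
{
	for (uint32_t i = 0; i + 1 < queue_num; i++) {
		if (queue[i + 1] != queue[i] + 1)
			return false;
	}
	return true;
}

/*
 * Once contiguity holds, queue[queue_num - 1] == queue[0] + queue_num - 1,
 * so one comparison bounds the entire region -- the check kept in
 * ice_switch_action_check() above.
 */
bool
queue_region_in_bounds(const uint16_t *queue, uint32_t queue_num,
		       uint16_t nb_rx_queues)
{
	return queues_contiguous(queue, queue_num) &&
	       (uint32_t)queue[0] + queue_num <= nb_rx_queues;
}
```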
* [PATCH v3 29/29] net/ice: use common action checks for ACL 2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov ` (27 preceding siblings ...) 2026-04-10 13:22 ` [PATCH v3 28/29] net/ice: use common action checks for switch Anatoly Burakov @ 2026-04-10 13:22 ` Anatoly Burakov 28 siblings, 0 replies; 83+ messages in thread From: Anatoly Burakov @ 2026-04-10 13:22 UTC (permalink / raw) To: dev, Bruce Richardson From: Vladimir Medvedkin <vladimir.medvedkin@intel.com> Use the common flow action checking parsing infrastructure for checking flow actions for ACL filter. Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com> --- drivers/net/intel/ice/ice_acl_filter.c | 143 +++++++++++++++---------- 1 file changed, 84 insertions(+), 59 deletions(-) diff --git a/drivers/net/intel/ice/ice_acl_filter.c b/drivers/net/intel/ice/ice_acl_filter.c index 0421578b32..90fd3c2c05 100644 --- a/drivers/net/intel/ice/ice_acl_filter.c +++ b/drivers/net/intel/ice/ice_acl_filter.c @@ -645,60 +645,6 @@ ice_acl_filter_free(struct rte_flow *flow) flow->rule = NULL; } -static int -ice_acl_parse_action(__rte_unused struct ice_adapter *ad, - const struct rte_flow_action actions[], - struct rte_flow_error *error, - struct ice_acl_conf *filter) -{ - struct ice_pf *pf = &ad->pf; - const struct rte_flow_action_queue *act_q; - uint32_t dest_num = 0; - - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { - switch (actions->type) { - case RTE_FLOW_ACTION_TYPE_VOID: - break; - case RTE_FLOW_ACTION_TYPE_DROP: - dest_num++; - - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; - break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - dest_num++; - - act_q = actions->conf; - filter->input.q_index = act_q->index; - if (filter->input.q_index >= - pf->dev_data->nb_rx_queues) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "Invalid queue for FDIR."); - return -rte_errno; - } - filter->input.dest_ctl = - ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; - break; - 
default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Invalid action."); - return -rte_errno; - } - } - - if (dest_num == 0 || dest_num >= 2) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, actions, - "Unsupported action combination"); - return -rte_errno; - } - - return 0; -} - static int ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, const struct rte_flow_item pattern[], @@ -966,6 +912,69 @@ ice_acl_parse_pattern(__rte_unused struct ice_adapter *ad, return 0; } +static int +ice_acl_parse_action(const struct ci_flow_actions *actions, + struct ice_acl_conf *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DROP_PKT; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + filter->input.q_index = act_q->index; + filter->input.dest_ctl = + ICE_FLTR_PRGM_DESC_DEST_DIRECT_PKT_QINDEX; + break; + } + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + } + + return 0; +} + +static int +ice_acl_parse_action_check(const struct ci_flow_actions *actions, + const struct ci_flow_actions_check_param *param, + struct rte_flow_error *error) +{ + struct ice_adapter *ad = param->driver_ctx; + struct ice_pf *pf = &ad->pf; + const struct rte_flow_action *act = actions->actions[0]; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_DROP: + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + const struct rte_flow_action_queue *act_q = act->conf; + + if (act_q->index >= pf->dev_data->nb_rx_queues) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid queue for ACL."); + } + break; + } + default: + /* shouldn't happen */ + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "Invalid action."); + 
} + + return 0; +} + static int ice_acl_parse(struct ice_adapter *ad, struct ice_pattern_match_item *array, @@ -976,17 +985,33 @@ ice_acl_parse(struct ice_adapter *ad, void **meta, struct rte_flow_error *error) { + struct ci_flow_actions parsed_actions = {0}; + struct ci_flow_actions_check_param param = { + .allowed_types = (enum rte_flow_action_type[]){ + RTE_FLOW_ACTION_TYPE_DROP, + RTE_FLOW_ACTION_TYPE_QUEUE, + RTE_FLOW_ACTION_TYPE_END + }, + .max_actions = 1, + .check = ice_acl_parse_action_check, + .driver_ctx = ad, + }; struct ice_pf *pf = &ad->pf; struct ice_acl_conf *filter = &pf->acl.conf; struct ice_pattern_match_item *item = NULL; uint64_t input_set; int ret; - ret = ci_flow_check_attr(attr, NULL, error); - if (ret) - return ret; - memset(filter, 0, sizeof(*filter)); + + ret = ci_flow_check_attr(attr, NULL, error); + if (ret) + return ret; + + ret = ci_flow_check_actions(actions, ¶m, &parsed_actions, error); + if (ret) + return ret; + item = ice_search_pattern_match_item(ad, pattern, array, array_len, error); if (!item) @@ -1005,7 +1030,7 @@ ice_acl_parse(struct ice_adapter *ad, goto error; } - ret = ice_acl_parse_action(ad, actions, error, filter); + ret = ice_acl_parse_action(&parsed_actions, filter, error); if (ret) goto error; -- 2.47.3 ^ permalink raw reply related [flat|nested] 83+ messages in thread
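The ACL conversion shows the split this series applies throughout: a read-only check callback that validates against device state (reached via `driver_ctx`) before any filter memory is touched, and a parse step that fills the filter assuming validation already passed. A minimal sketch with hypothetical names and a deliberately tiny filter struct (the real one is `struct ice_acl_conf`):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical minimal filter standing in for struct ice_acl_conf. */
struct acl_filter {
	int drop;          /* nonzero: drop the packet */
	uint16_t q_index;  /* target queue when not dropping */
};

/* Validation only: reject a queue index the device does not have.
 * Runs from the common-infrastructure check callback, before parsing. */
int
acl_check_queue(uint16_t q_index, uint16_t nb_rx_queues)
{
	return q_index < nb_rx_queues ? 0 : -1;
}

/* State fill only: assumes acl_check_queue() already accepted q_index,
 * so no error path is needed here. */
void
acl_parse_queue(struct acl_filter *filter, uint16_t q_index)
{
	filter->drop = 0;
	filter->q_index = q_index;
}
```

Keeping the error paths in the check callback is what lets the parse functions above shrink to straight-line code.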
end of thread, other threads:[~2026-04-10 13:23 UTC | newest]

Thread overview: 83+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-02-11 14:20 [PATCH v1 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 01/25] net/intel/common: add common flow action parsing Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 02/25] net/intel/common: add common flow attr validation Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 10/25] net/i40e: use common flow attribute checks Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 15/25] net/iavf: use common flow attribute checks Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 17/25] net/iavf: use common action checks for hash Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 19/25] net/iavf: use common action checks for fsub Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 20/25] net/iavf: use common action checks for flow query Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 21/25] net/ice: use common flow attribute checks Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 22/25] net/ice: use common action checks for hash Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 23/25] net/ice: use common action checks for FDIR Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 24/25] net/ice: use common action checks for switch Anatoly Burakov
2026-02-11 14:20 ` [PATCH v1 25/25] net/ice: use common action checks for ACL Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 01/25] net/intel/common: add common flow action parsing Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 02/25] net/intel/common: add common flow attr validation Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 03/25] net/ixgbe: use common checks in ethertype filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 04/25] net/ixgbe: use common checks in syn filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 05/25] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 06/25] net/ixgbe: use common checks in ntuple filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 07/25] net/ixgbe: use common checks in security filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 08/25] net/ixgbe: use common checks in FDIR filters Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 09/25] net/ixgbe: use common checks in RSS filter Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 10/25] net/i40e: use common flow attribute checks Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 11/25] net/i40e: refactor RSS flow parameter checks Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 12/25] net/i40e: use common action checks for ethertype Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 13/25] net/i40e: use common action checks for FDIR Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 14/25] net/i40e: use common action checks for tunnel Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 15/25] net/iavf: use common flow attribute checks Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 16/25] net/iavf: use common action checks for IPsec Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 17/25] net/iavf: use common action checks for hash Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 18/25] net/iavf: use common action checks for FDIR Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 19/25] net/iavf: use common action checks for fsub Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 20/25] net/iavf: use common action checks for flow query Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 21/25] net/ice: use common flow attribute checks Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 22/25] net/ice: use common action checks for hash Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 23/25] net/ice: use common action checks for FDIR Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 24/25] net/ice: use common action checks for switch Anatoly Burakov
2026-03-16 10:52 ` [PATCH v2 25/25] net/ice: use common action checks for ACL Anatoly Burakov
2026-03-16 10:59 ` [PATCH v2 00/25] Add common flow attr/action parsing infrastructure to Intel PMD's Bruce Richardson
2026-04-10 13:12 ` [PATCH v3 00/29] " Anatoly Burakov
2026-04-10 13:12 ` [PATCH v3 01/29] net/ixgbe: fix shared PF pointer in representor Anatoly Burakov
2026-04-10 13:12 ` [PATCH v3 02/29] net/ixgbe: store max VFs in adapter Anatoly Burakov
2026-04-10 13:12 ` [PATCH v3 03/29] net/ixgbe: reduce FDIR conf macro usage Anatoly Burakov
2026-04-10 13:12 ` [PATCH v3 04/29] net/ixgbe: use adapter in flow-related calls Anatoly Burakov
2026-04-10 13:12 ` [PATCH v3 05/29] net/intel/common: add common flow action parsing Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 06/29] net/intel/common: add common flow attr validation Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 07/29] net/ixgbe: use common checks in ethertype filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 08/29] net/ixgbe: use common checks in syn filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 09/29] net/ixgbe: use common checks in L2 tunnel filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 10/29] net/ixgbe: use common checks in ntuple filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 11/29] net/ixgbe: use common checks in security filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 12/29] net/ixgbe: use common checks in FDIR filters Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 13/29] net/ixgbe: use common checks in RSS filter Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 14/29] net/i40e: use common flow attribute checks Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 15/29] net/i40e: refactor RSS flow parameter checks Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 16/29] net/i40e: use common action checks for ethertype Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 17/29] net/i40e: use common action checks for FDIR Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 18/29] net/i40e: use common action checks for tunnel Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 19/29] net/iavf: use common flow attribute checks Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 20/29] net/iavf: use common action checks for IPsec Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 21/29] net/iavf: use common action checks for hash Anatoly Burakov
2026-04-10 13:13 ` [PATCH v3 22/29] net/iavf: use common action checks for FDIR Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 23/29] net/iavf: use common action checks for fsub Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 24/29] net/iavf: use common action checks for flow query Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 25/29] net/ice: use common flow attribute checks Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 26/29] net/ice: use common action checks for hash Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 27/29] net/ice: use common action checks for FDIR Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 28/29] net/ice: use common action checks for switch Anatoly Burakov
2026-04-10 13:22 ` [PATCH v3 29/29] net/ice: use common action checks for ACL Anatoly Burakov