public inbox for dev@dpdk.org
 help / color / mirror / Atom feed
* [PATCH 0/9] net/mlx5: lazily allocate HWS actions
@ 2026-02-25 11:59 Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 1/9] net/mlx5: use DPDK be64 type in modify header pattern Dariusz Sosnowski
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

In the mlx5 PMD, the HWS flow engine is configured either:

- implicitly on port start, or
- explicitly on rte_flow_configure().

As part of flow engine configuration, the PMD allocates a set of global
HWS action objects. These are used to implement flow actions such as:

- DROP
- MARK and FLAG (HWS tag action)
- OF_PUSH_VLAN
- OF_POP_VLAN
- SEND_TO_KERNEL
- NAT64
- PORT_REPRESENTOR (HWS default miss action).

These actions can be allocated once per flow domain
and parameterized at runtime.
Allocating these actions requires allocating STCs,
HW objects used to set up HW actions.
STCs are allocated in bulk to reduce the number of syscalls needed.

In case of the global actions listed above, these STCs were always allocated,
even if none of the related flow actions were used by the application.
This caused unnecessary system memory usage, which could reach 4 MB per port.
On systems with multiple VFs/SFs, total memory usage could reach a couple of GB.

This patchset addresses that by introducing lazy allocation of these actions:

- Patch 1 - Redefines mlx5dr_action_mh_pattern to use rte_be64_t instead of __be64,
  to prevent compilation issues when mlx5dr.h is included in new files.
- Patch 2 - Adds helpers for translating an HWS flow table type to the HWS action
  flags used on allocation, to simplify logic added in follow-up patches.
- Patch 3 - Introduces a dedicated internal interface for lazily allocating
  HWS actions. The drop action is handled first.
- Patches 4-9 - Each patch adjusts one action type to use lazy allocation.
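
The allocation scheme the series moves to can be sketched, independent of
mlx5 specifics, as a get-or-create lookup guarded by a lock (the PMD keeps
the lock and per-domain array in mlx5_hws_global_actions); the types and
names below are illustrative stand-ins, not the PMD's identifiers:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the PMD's action object and allocator. */
struct action { int domain; };

static struct action *cache[4];            /* one slot per flow domain */
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static int alloc_count;                    /* counts real allocations */

static struct action *action_create(int domain)
{
	struct action *a = malloc(sizeof(*a));

	if (a != NULL) {
		a->domain = domain;
		alloc_count++;
	}
	return a;
}

/* Get-or-create: the HW object is only allocated on first use,
 * and only for the domain actually requested. */
static struct action *action_get(int domain)
{
	struct action *a;

	pthread_mutex_lock(&cache_lock);
	a = cache[domain];
	if (a == NULL) {
		a = action_create(domain);
		cache[domain] = a;	/* stays NULL on failure */
	}
	pthread_mutex_unlock(&cache_lock);
	return a;
}
```

A port that never installs a DROP rule never calls action_get() for it,
so the underlying FW/memory resources are never allocated.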

Dariusz Sosnowski (9):
  net/mlx5: use DPDK be64 type in modify header pattern
  net/mlx5/hws: add table type to action flags conversion
  net/mlx5: lazily allocate drop HWS action
  net/mlx5: lazily allocate tag HWS action
  net/mlx5: lazily allocate HWS pop VLAN action
  net/mlx5: lazily allocate HWS push VLAN action
  net/mlx5: lazily allocate HWS send to kernel action
  net/mlx5: lazily allocate HWS NAT64 action
  net/mlx5: lazily allocate HWS default miss action

 drivers/net/mlx5/hws/mlx5dr.h              |  19 +-
 drivers/net/mlx5/hws/mlx5dr_action.c       |  26 +-
 drivers/net/mlx5/hws/mlx5dr_pat_arg.c      |  20 +-
 drivers/net/mlx5/hws/mlx5dr_pat_arg.h      |   8 +-
 drivers/net/mlx5/hws/mlx5dr_table.c        |  75 +++
 drivers/net/mlx5/meson.build               |   1 +
 drivers/net/mlx5/mlx5.h                    |  21 +-
 drivers/net/mlx5/mlx5_flow.h               |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c            |   2 +-
 drivers/net/mlx5/mlx5_flow_hw.c            | 557 ++++++---------------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 273 ++++++++++
 drivers/net/mlx5/mlx5_hws_global_actions.h |  71 +++
 12 files changed, 611 insertions(+), 463 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.c
 create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.h

--
2.47.3


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/9] net/mlx5: use DPDK be64 type in modify header pattern
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 2/9] net/mlx5/hws: add table type to action flags conversion Dariusz Sosnowski
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

mlx5dr.h defines the mlx5dr_action_mh_pattern struct.
One of its fields is an array of modify field HW actions,
each 64 bits in size and stored as big endian.
Before this patch, this array was defined using the __be64
typedef from the "linux/types.h" header file.

This necessitated either including that header file
or, in case of the Windows build, adding the typedef directly
whenever mlx5dr.h was included.
Follow-up patches which add lazy allocation of global
HWS actions introduce additional source files which include
mlx5dr.h. To avoid the extra includes/typedef definitions
required for __be64, this patch changes the definition to use
DPDK's typedef, i.e., rte_be64_t.
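
For illustration, rte_be64_t is an ordinary uint64_t whose contents are
big-endian; the typedef documents intent rather than adding behavior. A
self-contained sketch of the equivalent conversion (DPDK provides this as
rte_cpu_to_be_64(); the names below are stand-ins):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for DPDK's rte_be64_t: a plain uint64_t holding
 * big-endian data. */
typedef uint64_t be64_t;

/* Portable CPU-to-big-endian conversion: emit the most significant
 * byte first regardless of host endianness. */
static be64_t cpu_to_be64(uint64_t v)
{
	uint8_t b[8];
	be64_t out;
	int i;

	for (i = 0; i < 8; i++)
		b[i] = (uint8_t)(v >> (56 - 8 * i)); /* MSB first */
	memcpy(&out, b, sizeof(out));
	return out;
}
```

Because the typedef carries no Linux-specific baggage, any file including
mlx5dr.h only needs rte_byteorder.h, on Linux and Windows alike.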

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         |  3 ++-
 drivers/net/mlx5/hws/mlx5dr_action.c  | 26 +++++++++++++-------------
 drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 20 ++++++++++----------
 drivers/net/mlx5/hws/mlx5dr_pat_arg.h |  8 ++++----
 drivers/net/mlx5/mlx5.h               |  2 --
 drivers/net/mlx5/mlx5_flow_dv.c       |  2 +-
 drivers/net/mlx5/mlx5_flow_hw.c       |  4 ++--
 7 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index c13316305f..d358178e5b 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -5,6 +5,7 @@
 #ifndef MLX5DR_H_
 #define MLX5DR_H_
 
+#include <rte_byteorder.h>
 #include <rte_flow.h>
 
 struct mlx5dr_context;
@@ -248,7 +249,7 @@ struct mlx5dr_action_mh_pattern {
 	/* Byte size of modify actions provided by "data" */
 	size_t sz;
 	/* PRM format modify actions pattern */
-	__be64 *data;
+	rte_be64_t *data;
 };
 
 /* In actions that take offset, the offset is unique, pointing to a single
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index b35bf07c3c..439592c5e4 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -369,7 +369,7 @@ mlx5dr_action_create_nat64_copy_state(struct mlx5dr_context *ctx,
 				      struct mlx5dr_action_nat64_attr *attr,
 				      uint32_t flags)
 {
-	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	rte_be64_t modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
 	struct mlx5dr_action_mh_pattern pat[2];
 	struct mlx5dr_action *action;
 	uint32_t packet_len_field;
@@ -484,7 +484,7 @@ mlx5dr_action_create_nat64_repalce_state(struct mlx5dr_context *ctx,
 					 uint32_t flags)
 {
 	uint32_t address_prefix[MLX5DR_ACTION_NAT64_HEADER_MINUS_ONE] = {0};
-	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	rte_be64_t modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
 	struct mlx5dr_action_mh_pattern pat[2];
 	static struct mlx5dr_action *action;
 	uint8_t header_size_in_dw;
@@ -575,7 +575,7 @@ mlx5dr_action_create_nat64_copy_proto_state(struct mlx5dr_context *ctx,
 					    struct mlx5dr_action_nat64_attr *attr,
 					    uint32_t flags)
 {
-	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	rte_be64_t modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
 	struct mlx5dr_action_mh_pattern pat[2];
 	struct mlx5dr_action *action;
 	uint8_t *action_ptr;
@@ -615,7 +615,7 @@ mlx5dr_action_create_nat64_copy_back_state(struct mlx5dr_context *ctx,
 					   struct mlx5dr_action_nat64_attr *attr,
 					   uint32_t flags)
 {
-	__be64 modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
+	rte_be64_t modify_action_data[MLX5DR_ACTION_NAT64_MAX_MODIFY_ACTIONS];
 	struct mlx5dr_action_mh_pattern pat[2];
 	struct mlx5dr_action *action;
 	uint32_t packet_len_field;
@@ -2165,7 +2165,7 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
 
 	/* All DecapL3 cases require the same max arg size */
 	arg_obj = mlx5dr_arg_create_modify_header_arg(ctx,
-						      (__be64 *)mh_data,
+						      (rte_be64_t *)mh_data,
 						      num_of_actions,
 						      log_bulk_sz,
 						      action->flags & MLX5DR_ACTION_FLAG_SHARED);
@@ -2177,7 +2177,7 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
 		mlx5dr_action_prepare_decap_l3_actions(hdrs[i].sz, mh_data, &num_of_actions);
 		mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE;
 
-		pat_obj = mlx5dr_pat_get_pattern(ctx, (__be64 *)mh_data, mh_data_size);
+		pat_obj = mlx5dr_pat_get_pattern(ctx, (rte_be64_t *)mh_data, mh_data_size);
 		if (!pat_obj) {
 			DR_LOG(ERR, "Failed to allocate pattern for DecapL3");
 			goto free_stc_and_pat;
@@ -2189,7 +2189,7 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
 		action[i].modify_header.arg_obj = arg_obj;
 		action[i].modify_header.pat_obj = pat_obj;
 		action[i].modify_header.require_reparse =
-			mlx5dr_pat_require_reparse((__be64 *)mh_data, num_of_actions);
+			mlx5dr_pat_require_reparse((rte_be64_t *)mh_data, num_of_actions);
 
 		ret = mlx5dr_action_create_stcs(&action[i], NULL);
 		if (ret) {
@@ -2304,7 +2304,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
 static int
 mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action,
 					size_t actions_sz,
-					__be64 *actions)
+					rte_be64_t *actions)
 {
 	enum mlx5dv_flow_table_type ft_type = 0;
 	struct ibv_context *local_ibv_ctx;
@@ -2897,7 +2897,7 @@ static void *
 mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
 {
 	struct mlx5dr_action_mh_pattern pattern;
-	__be64 cmd[3] = {0};
+	rte_be64_t cmd[3] = {0};
 	uint16_t mod_id;
 
 	mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
@@ -2943,7 +2943,7 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
 		MLX5_MODI_OUT_DIPV6_31_0
 	};
 	struct mlx5dr_action_mh_pattern pattern;
-	__be64 cmd[5] = {0};
+	rte_be64_t cmd[5] = {0};
 	uint16_t mod_id;
 	uint32_t i;
 
@@ -3001,7 +3001,7 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action)
 	MLX5_SET(copy_action_in, cmd, src_field, mod_id);
 	MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IP_PROTOCOL);
 
-	pattern.data = (__be64 *)cmd;
+	pattern.data = (rte_be64_t *)cmd;
 	pattern.sz = sizeof(cmd);
 
 	return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
@@ -3063,7 +3063,7 @@ mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
 	MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IP_PROTOCOL);
 	MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING);
 
-	pattern.data = (__be64 *)cmd;
+	pattern.data = (rte_be64_t *)cmd;
 	pattern.sz = sizeof(cmd);
 
 	return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0,
@@ -3084,7 +3084,7 @@ mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action,
 	struct mlx5dr_action_mh_pattern pattern;
 	uint32_t *ipv6_dst_addr = NULL;
 	uint8_t seg_left, next_hdr;
-	__be64 cmd[5] = {0};
+	rte_be64_t cmd[5] = {0};
 	uint16_t mod_id;
 	uint32_t i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
index 513549ff3c..dfed5eb47a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
@@ -37,7 +37,7 @@ uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions)
 	return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions));
 }
 
-bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions)
+bool mlx5dr_pat_require_reparse(rte_be64_t *actions, uint16_t num_of_actions)
 {
 	uint16_t i, field;
 	uint8_t action_id;
@@ -98,9 +98,9 @@ void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache)
 }
 
 static bool mlx5dr_pat_compare_pattern(int cur_num_of_actions,
-				       __be64 cur_actions[],
+				       rte_be64_t cur_actions[],
 				       int num_of_actions,
-				       __be64 actions[])
+				       rte_be64_t actions[])
 {
 	int i;
 
@@ -129,13 +129,13 @@ static bool mlx5dr_pat_compare_pattern(int cur_num_of_actions,
 static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache,
 			       uint16_t num_of_actions,
-			       __be64 *actions)
+			       rte_be64_t *actions)
 {
 	struct mlx5dr_pattern_cache_item *cached_pat;
 
 	LIST_FOREACH(cached_pat, &cache->head, next) {
 		if (mlx5dr_pat_compare_pattern(cached_pat->mh_data.num_of_actions,
-					       (__be64 *)cached_pat->mh_data.data,
+					       (rte_be64_t *)cached_pat->mh_data.data,
 					       num_of_actions,
 					       actions))
 			return cached_pat;
@@ -147,7 +147,7 @@ mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache,
 static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache,
 				       uint16_t num_of_actions,
-				       __be64 *actions)
+				       rte_be64_t *actions)
 {
 	struct mlx5dr_pattern_cache_item *cached_pattern;
 
@@ -166,7 +166,7 @@ static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache,
 				struct mlx5dr_devx_obj *pattern_obj,
 				uint16_t num_of_actions,
-				__be64 *actions)
+				rte_be64_t *actions)
 {
 	struct mlx5dr_pattern_cache_item *cached_pattern;
 
@@ -248,7 +248,7 @@ void mlx5dr_pat_put_pattern(struct mlx5dr_context *ctx,
 
 struct mlx5dr_devx_obj *
 mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx,
-		       __be64 *pattern, size_t pattern_sz)
+		       rte_be64_t *pattern, size_t pattern_sz)
 {
 	uint16_t num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE;
 	struct mlx5dr_pattern_cache_item *cached_pattern;
@@ -458,7 +458,7 @@ mlx5dr_arg_create(struct mlx5dr_context *ctx,
 
 struct mlx5dr_devx_obj *
 mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx,
-				    __be64 *data,
+				    rte_be64_t *data,
 				    uint8_t num_of_actions,
 				    uint32_t log_bulk_sz,
 				    bool write_data)
@@ -477,7 +477,7 @@ mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx,
 	return arg_obj;
 }
 
-bool mlx5dr_pat_verify_actions(__be64 pattern[], size_t sz)
+bool mlx5dr_pat_verify_actions(rte_be64_t pattern[], size_t sz)
 {
 	size_t i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
index c4e0cbc843..3418469962 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
@@ -51,7 +51,7 @@ int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache);
 
 void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache);
 
-bool mlx5dr_pat_verify_actions(__be64 pattern[], size_t sz);
+bool mlx5dr_pat_verify_actions(rte_be64_t pattern[], size_t sz);
 
 struct mlx5dr_devx_obj *
 mlx5dr_arg_create(struct mlx5dr_context *ctx,
@@ -62,14 +62,14 @@ mlx5dr_arg_create(struct mlx5dr_context *ctx,
 
 struct mlx5dr_devx_obj *
 mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx,
-				    __be64 *data,
+				    rte_be64_t *data,
 				    uint8_t num_of_actions,
 				    uint32_t log_bulk_sz,
 				    bool write_data);
 
 struct mlx5dr_devx_obj *
 mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx,
-		       __be64 *pattern,
+		       rte_be64_t *pattern,
 		       size_t pattern_sz);
 
 void mlx5dr_pat_put_pattern(struct mlx5dr_context *ctx,
@@ -78,7 +78,7 @@ void mlx5dr_pat_put_pattern(struct mlx5dr_context *ctx,
 bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx,
 					  uint32_t arg_size);
 
-bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions);
+bool mlx5dr_pat_require_reparse(rte_be64_t *actions, uint16_t num_of_actions);
 
 void mlx5dr_arg_write(struct mlx5dr_send_engine *queue,
 		      void *comp_data,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b83dda5652..e9d855e345 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -37,8 +37,6 @@
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 #ifndef RTE_EXEC_ENV_WINDOWS
 #define HAVE_MLX5_HWS_SUPPORT 1
-#else
-#define __be64 uint64_t
 #endif
 #include "hws/mlx5dr.h"
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 99ab0125a8..d1bed18077 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6294,7 +6294,7 @@ mlx5_flow_modify_create_cb(void *tool_ctx, void *cb_ctx)
 #ifdef HAVE_MLX5_HWS_SUPPORT
 		struct mlx5dr_action_mh_pattern pattern = {
 			.sz = data_len,
-			.data = (__be64 *)ref->actions
+			.data = (rte_be64_t *)ref->actions
 		};
 		entry->action = mlx5dr_action_create_modify_header(ctx->data2,
 			1,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6ac6825c16..b29909f99d 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2436,7 +2436,7 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 					  NULL, "translate modify_header: no memory for modify header context");
 	rte_memcpy(acts->mhdr, mhdr, sizeof(*mhdr));
 	if (!mhdr->shared) {
-		pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
+		pattern.data = (rte_be64_t *)acts->mhdr->mhdr_cmds;
 		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
 		uint32_t idx = mh->elements_num;
 		mh->pattern[mh->elements_num++] = pattern;
@@ -2470,7 +2470,7 @@ mlx5_tbl_ensure_shared_modify_header(struct rte_eth_dev *dev,
 	uint16_t mhdr_ix = acts->mhdr->pos;
 	uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] | MLX5DR_ACTION_FLAG_SHARED;
 
-	pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
+	pattern.data = (rte_be64_t *)acts->mhdr->mhdr_cmds;
 	acts->mhdr->action = mlx5dr_action_create_modify_header(priv->dr_ctx, 1,
 								&pattern, 0, flags);
 	if (!acts->mhdr->action)
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/9] net/mlx5/hws: add table type to action flags conversion
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 1/9] net/mlx5: use DPDK be64 type in modify header pattern Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 3/9] net/mlx5: lazily allocate drop HWS action Dariusz Sosnowski
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

Add a function for converting an HWS table type to the corresponding
HWS action flag, for both root and non-root tables.
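
The mapping has the same shape for root and non-root tables: one flag per
table type, with EINVAL for unknown types. A self-contained sketch of the
call pattern, using illustrative stand-in enums (not the actual mlx5dr
values):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Simplified stand-ins for the mlx5dr enums; values are illustrative. */
enum table_type { TBL_NIC_RX, TBL_NIC_TX, TBL_FDB, TBL_MAX };
enum action_flags {
	FLAG_ROOT_RX = 1 << 0, FLAG_ROOT_TX = 1 << 1, FLAG_ROOT_FDB = 1 << 2,
	FLAG_HWS_RX  = 1 << 3, FLAG_HWS_TX  = 1 << 4, FLAG_HWS_FDB  = 1 << 5,
};

/* Same contract as mlx5dr_table_type_to_action_flags(): write the flag
 * through an out parameter, return 0 or a negative errno. */
static int table_type_to_action_flags(enum table_type t, bool is_root,
				      enum action_flags *out)
{
	static const enum action_flags root[TBL_MAX] = {
		[TBL_NIC_RX] = FLAG_ROOT_RX,
		[TBL_NIC_TX] = FLAG_ROOT_TX,
		[TBL_FDB] = FLAG_ROOT_FDB,
	};
	static const enum action_flags nonroot[TBL_MAX] = {
		[TBL_NIC_RX] = FLAG_HWS_RX,
		[TBL_NIC_TX] = FLAG_HWS_TX,
		[TBL_FDB] = FLAG_HWS_FDB,
	};

	if ((unsigned int)t >= TBL_MAX)
		return -EINVAL;
	*out = is_root ? root[t] : nonroot[t];
	return 0;
}
```

Callers in later patches pass the table type of the flow table being
translated and use the resulting flags when creating the HWS action.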

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h       | 16 ++++++
 drivers/net/mlx5/hws/mlx5dr_table.c | 75 +++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d358178e5b..078740a530 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -369,6 +369,22 @@ mlx5dr_context_open(struct ibv_context *ibv_ctx,
  */
 int mlx5dr_context_close(struct mlx5dr_context *ctx);
 
+/**
+ * Convert given table type to corresponding action flags.
+ *
+ * @param[in] table_type
+ *	Table type.
+ * @param[in] is_root
+ *	Whether table should be considered root or not.
+ * @param[out] action_flags
+ *	Corresponding action flags will be written here.
+ * @return
+ *	0 on success. Negative errno and rte_errno is set otherwise.
+ */
+int mlx5dr_table_type_to_action_flags(const enum mlx5dr_table_type table_type,
+				      const bool is_root,
+				      enum mlx5dr_action_flags *action_flags);
+
 /* Create a new direct rule table. Each table can contain multiple matchers.
  *
  * @param[in] ctx
diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c
index c1c60b4e52..41ffaa19e3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_table.c
+++ b/drivers/net/mlx5/hws/mlx5dr_table.c
@@ -4,6 +4,81 @@
 
 #include "mlx5dr_internal.h"
 
+static int
+table_type_to_root_action_flags(enum mlx5dr_table_type table_type,
+				enum mlx5dr_action_flags *out)
+{
+	switch (table_type) {
+	case MLX5DR_TABLE_TYPE_NIC_RX:
+		*out = MLX5DR_ACTION_FLAG_ROOT_RX;
+		break;
+	case MLX5DR_TABLE_TYPE_NIC_TX:
+		*out = MLX5DR_ACTION_FLAG_ROOT_TX;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB:
+		*out = MLX5DR_ACTION_FLAG_ROOT_FDB;
+		break;
+	default:
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+table_type_to_nonroot_action_flags(enum mlx5dr_table_type table_type,
+				   enum mlx5dr_action_flags *out)
+{
+	switch (table_type) {
+	case MLX5DR_TABLE_TYPE_NIC_RX:
+		*out = MLX5DR_ACTION_FLAG_HWS_RX;
+		break;
+	case MLX5DR_TABLE_TYPE_NIC_TX:
+		*out = MLX5DR_ACTION_FLAG_HWS_TX;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB:
+		*out = MLX5DR_ACTION_FLAG_HWS_FDB;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB_RX:
+		*out = MLX5DR_ACTION_FLAG_HWS_FDB_RX;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB_TX:
+		*out = MLX5DR_ACTION_FLAG_HWS_FDB_TX;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB_UNIFIED:
+		*out = MLX5DR_ACTION_FLAG_HWS_FDB_UNIFIED;
+		break;
+	default:
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+int
+mlx5dr_table_type_to_action_flags(const enum mlx5dr_table_type table_type,
+				  const bool is_root,
+				  enum mlx5dr_action_flags *action_flags)
+{
+	int ret = 0;
+
+	if (is_root) {
+		ret = table_type_to_root_action_flags(table_type, action_flags);
+		if (ret < 0)
+			DR_LOG(ERR, "Cannot convert table type %d to action flags for root table",
+			       table_type);
+	} else {
+		ret = table_type_to_nonroot_action_flags(table_type, action_flags);
+		if (ret < 0)
+			DR_LOG(ERR, "Cannot convert table type %d to action flags for HWS table",
+			       table_type);
+	}
+
+	return ret;
+}
+
 static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl,
 					   struct mlx5dr_cmd_ft_create_attr *ft_attr)
 {
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/9] net/mlx5: lazily allocate drop HWS action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 1/9] net/mlx5: use DPDK be64 type in modify header pattern Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 2/9] net/mlx5/hws: add table type to action flags conversion Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 4/9] net/mlx5: lazily allocate tag " Dariusz Sosnowski
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

As of now, the HWS drop action, used for the DROP flow action,
is allocated either on port start or on rte_flow_configure().
This can cause unnecessary FW resource usage
if the user does not use any DROP actions.

This patch adds a dedicated internal API - mlx5_hws_global_actions -
in the mlx5 PMD for lazily allocating the HWS drop action, delaying
FW resource allocation until needed.
Instead of allocating a single HWS drop action supporting all possible
domains (NIC Rx, NIC Tx, FDB if relevant) as was done previously,
a separate action is allocated for each domain as needed,
to further minimize FW resource usage.

Follow-up commits will extend this API with the remaining actions
which are currently pre-allocated on port start or on rte_flow_configure().

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/meson.build               |  1 +
 drivers/net/mlx5/mlx5.h                    |  5 +-
 drivers/net/mlx5/mlx5_flow_hw.c            | 27 ++++-----
 drivers/net/mlx5/mlx5_hws_global_actions.c | 68 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5_hws_global_actions.h | 39 +++++++++++++
 5 files changed, 123 insertions(+), 17 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.c
 create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.h

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 28275bed21..82a7dfe782 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -53,6 +53,7 @@ if is_linux
             'mlx5_flow_quota.c',
             'mlx5_flow_verbs.c',
             'mlx5_hws_cnt.c',
+            'mlx5_hws_global_actions.c',
             'mlx5_nta_split.c',
             'mlx5_nta_sample.c',
     )
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e9d855e345..54683cce7a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -39,6 +39,7 @@
 #define HAVE_MLX5_HWS_SUPPORT 1
 #endif
 #include "hws/mlx5dr.h"
+#include "mlx5_hws_global_actions.h"
 #endif
 
 #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh)
@@ -2112,8 +2113,8 @@ struct mlx5_priv {
 	struct mlx5dr_action *hw_push_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action *hw_pop_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action **hw_vport;
-	/* HW steering global drop action. */
-	struct mlx5dr_action *hw_drop[2];
+	/* HWS global actions. */
+	struct mlx5_hws_global_actions hw_global_actions;
 	/* HW steering global tag action. */
 	struct mlx5dr_action *hw_tag[2];
 	/* HW steering global default miss action. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b29909f99d..80e156f26a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2634,6 +2634,7 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 	int ret, err;
 	bool is_root = mlx5_group_id_is_root(cfg->attr.flow_attr.group);
 	bool unified_fdb = is_unified_fdb(priv);
+	struct mlx5dr_action *dr_action = NULL;
 
 	flow_hw_modify_field_init(&mhdr, at);
 	type = get_mlx5dr_table_type(attr, cfg->attr.specialize, unified_fdb);
@@ -2672,8 +2673,16 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
 		case RTE_FLOW_ACTION_TYPE_DROP:
-			acts->rule_acts[dr_pos].action =
-				priv->hw_drop[!!attr->group];
+			dr_action = mlx5_hws_global_action_drop_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate drop action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate drop action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			break;
 		case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
 			if (is_root) {
@@ -11965,8 +11974,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 		at = temp_at;
 	}
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-		if (priv->hw_drop[i])
-			mlx5dr_action_destroy(priv->hw_drop[i]);
 		if (priv->hw_tag[i])
 			mlx5dr_action_destroy(priv->hw_tag[i]);
 	}
@@ -11995,6 +12002,7 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 		priv->ct_mng = NULL;
 	}
 	mlx5_flow_quota_destroy(dev);
+	mlx5_hws_global_actions_cleanup(priv);
 	if (priv->hw_q) {
 		for (i = 0; i < priv->nb_queue; i++) {
 			struct mlx5_hw_q *hwq = &priv->hw_q[i];
@@ -12345,29 +12353,18 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 			goto err;
 	/* Add global actions. */
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-		uint32_t act_flags = 0;
 		uint32_t tag_flags = mlx5_hw_act_flag[i][0];
 		bool tag_fdb_rx = !!priv->sh->cdev->config.hca_attr.fdb_rx_set_flow_tag_stc;
 
-		act_flags = mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_NIC_RX] |
-			    mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_NIC_TX];
 		if (is_proxy) {
 			if (unified_fdb) {
-				act_flags |=
-					(mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB_RX] |
-					 mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB_TX] |
-					 mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB_UNIFIED]);
 				if (i == MLX5_HW_ACTION_FLAG_NONE_ROOT && tag_fdb_rx)
 					tag_flags |= mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB_RX];
 			} else {
-				act_flags |= mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB];
 				if (i == MLX5_HW_ACTION_FLAG_NONE_ROOT && tag_fdb_rx)
 					tag_flags |= mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB];
 			}
 		}
-		priv->hw_drop[i] = mlx5dr_action_create_dest_drop(priv->dr_ctx, act_flags);
-		if (!priv->hw_drop[i])
-			goto err;
 		priv->hw_tag[i] = mlx5dr_action_create_tag
 			(priv->dr_ctx, tag_flags);
 		if (!priv->hw_tag[i])
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
new file mode 100644
index 0000000000..6af5497123
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2026 NVIDIA Corporation & Affiliates
+ */
+
+#include "mlx5_hws_global_actions.h"
+
+#include "mlx5.h"
+
+void
+mlx5_hws_global_actions_init(struct mlx5_priv *priv)
+{
+	rte_spinlock_init(&priv->hw_global_actions.lock);
+}
+
+void
+mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
+{
+	rte_spinlock_lock(&priv->hw_global_actions.lock);
+
+	for (int i = 0; i < MLX5_HWS_GLOBAL_ACTION_MAX; ++i) {
+		for (int j = 0; j < MLX5DR_TABLE_TYPE_MAX; ++j) {
+			int ret;
+
+			if (priv->hw_global_actions.drop.arr[i][j] == NULL)
+				continue;
+
+			ret = mlx5dr_action_destroy(priv->hw_global_actions.drop.arr[i][j]);
+			if (ret != 0)
+				DRV_LOG(ERR, "port %u failed to free HWS action",
+					priv->dev_data->port_id);
+			priv->hw_global_actions.drop.arr[i][j] = NULL;
+		}
+	}
+
+	rte_spinlock_unlock(&priv->hw_global_actions.lock);
+}
+
+struct mlx5dr_action *
+mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
+				enum mlx5dr_table_type table_type,
+				bool is_root)
+{
+	enum mlx5dr_action_flags action_flags;
+	struct mlx5dr_action *action = NULL;
+	int ret;
+
+	ret = mlx5dr_table_type_to_action_flags(table_type, is_root, &action_flags);
+	if (ret < 0)
+		return NULL;
+
+	rte_spinlock_lock(&priv->hw_global_actions.lock);
+
+	action = priv->hw_global_actions.drop.arr[!is_root][table_type];
+	if (action != NULL)
+		goto unlock_ret;
+
+	action = mlx5dr_action_create_dest_drop(priv->dr_ctx, action_flags);
+	if (action == NULL) {
+		DRV_LOG(ERR, "port %u failed to create drop HWS action", priv->dev_data->port_id);
+		goto unlock_ret;
+	}
+
+	priv->hw_global_actions.drop.arr[!is_root][table_type] = action;
+
+unlock_ret:
+	rte_spinlock_unlock(&priv->hw_global_actions.lock);
+	return action;
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
new file mode 100644
index 0000000000..3921004102
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2026 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_
+#define RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_
+
+#include <stdint.h>
+
+#include <rte_spinlock.h>
+
+#include "hws/mlx5dr.h"
+
+struct mlx5_priv;
+
+enum mlx5_hws_global_action_index {
+	MLX5_HWS_GLOBAL_ACTION_ROOT,
+	MLX5_HWS_GLOBAL_ACTION_NON_ROOT,
+	MLX5_HWS_GLOBAL_ACTION_MAX,
+};
+
+struct mlx5_hws_global_actions_array {
+	struct mlx5dr_action *arr[MLX5_HWS_GLOBAL_ACTION_MAX][MLX5DR_TABLE_TYPE_MAX];
+};
+
+struct mlx5_hws_global_actions {
+	struct mlx5_hws_global_actions_array drop;
+	rte_spinlock_t lock;
+};
+
+void mlx5_hws_global_actions_init(struct mlx5_priv *priv);
+
+void mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv);
+
+struct mlx5dr_action *mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
+						      enum mlx5dr_table_type table_type,
+						      bool is_root);
+
+#endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 4/9] net/mlx5: lazily allocate tag HWS action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (2 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 3/9] net/mlx5: lazily allocate drop HWS action Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 5/9] net/mlx5: lazily allocate HWS pop VLAN action Dariusz Sosnowski
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS tag action is used to implement the FLAG and MARK rte_flow
actions. It was allocated either on port start or on
rte_flow_configure(). This could cause unnecessary FW resource usage
if the application did not use any FLAG/MARK actions.

This patch extends the internal global actions API,
introduced in the previous commit, to allow lazy allocation
of the HWS tag action. It will be allocated on first use of a
FLAG/MARK action, and per flow domain, to minimize
FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |  2 -
 drivers/net/mlx5/mlx5_flow_hw.c            | 47 +++++------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 92 ++++++++++++++++++----
 drivers/net/mlx5/mlx5_hws_global_actions.h |  5 ++
 4 files changed, 100 insertions(+), 46 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 54683cce7a..43553b1f35 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2115,8 +2115,6 @@ struct mlx5_priv {
 	struct mlx5dr_action **hw_vport;
 	/* HWS global actions. */
 	struct mlx5_hws_global_actions hw_global_actions;
-	/* HW steering global tag action. */
-	struct mlx5dr_action *hw_tag[2];
 	/* HW steering global default miss action. */
 	struct mlx5dr_action *hw_def_miss;
 	/* HW steering global send to kernel action. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 80e156f26a..54c30264b2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2692,16 +2692,33 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
 			break;
 		case RTE_FLOW_ACTION_TYPE_FLAG:
+			dr_action = mlx5_hws_global_action_tag_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate flag action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate flag action");
+				goto err;
+			}
 			acts->mark = true;
 			acts->rule_acts[dr_pos].tag.value =
 				mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
-			acts->rule_acts[dr_pos].action =
-				priv->hw_tag[!!attr->group];
+			acts->rule_acts[dr_pos].action = dr_action;
 			rte_atomic_fetch_add_explicit(&priv->hws_mark_refcnt, 1,
 					rte_memory_order_relaxed);
 			mlx5_flow_hw_rxq_flag_set(dev, true);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MARK:
+			dr_action = mlx5_hws_global_action_tag_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate mark action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate mark action");
+				goto err;
+			}
 			acts->mark = true;
 			if (masks->conf &&
 			    ((const struct rte_flow_action_mark *)
@@ -2714,8 +2731,7 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 								   actions->type,
 								   src_pos, dr_pos))
 				goto err;
-			acts->rule_acts[dr_pos].action =
-				priv->hw_tag[!!attr->group];
+			acts->rule_acts[dr_pos].action = dr_action;
 			rte_atomic_fetch_add_explicit(&priv->hws_mark_refcnt, 1,
 					rte_memory_order_relaxed);
 			mlx5_flow_hw_rxq_flag_set(dev, true);
@@ -11973,10 +11989,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 		claim_zero(flow_hw_actions_template_destroy(dev, at, NULL));
 		at = temp_at;
 	}
-	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-		if (priv->hw_tag[i])
-			mlx5dr_action_destroy(priv->hw_tag[i]);
-	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
 	flow_hw_destroy_nat64_actions(priv);
@@ -12351,25 +12363,6 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 	if (port_attr->nb_meters || (host_priv && host_priv->hws_mpool))
 		if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 0, 0, nb_q_updated))
 			goto err;
-	/* Add global actions. */
-	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-		uint32_t tag_flags = mlx5_hw_act_flag[i][0];
-		bool tag_fdb_rx = !!priv->sh->cdev->config.hca_attr.fdb_rx_set_flow_tag_stc;
-
-		if (is_proxy) {
-			if (unified_fdb) {
-				if (i == MLX5_HW_ACTION_FLAG_NONE_ROOT && tag_fdb_rx)
-					tag_flags |= mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB_RX];
-			} else {
-				if (i == MLX5_HW_ACTION_FLAG_NONE_ROOT && tag_fdb_rx)
-					tag_flags |= mlx5_hw_act_flag[i][MLX5DR_TABLE_TYPE_FDB];
-			}
-		}
-		priv->hw_tag[i] = mlx5dr_action_create_tag
-			(priv->dr_ctx, tag_flags);
-		if (!priv->hw_tag[i])
-			goto err;
-	}
 	if (priv->sh->config.dv_esw_en) {
 		ret = flow_hw_setup_tx_repr_tagging(dev, error);
 		if (ret)
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index 6af5497123..1ca444ce98 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -12,33 +12,63 @@ mlx5_hws_global_actions_init(struct mlx5_priv *priv)
 	rte_spinlock_init(&priv->hw_global_actions.lock);
 }
 
-void
-mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
+static void
+global_actions_array_cleanup(struct mlx5_priv *priv,
+			     struct mlx5_hws_global_actions_array *array,
+			     const char *name)
 {
-	rte_spinlock_lock(&priv->hw_global_actions.lock);
-
 	for (int i = 0; i < MLX5_HWS_GLOBAL_ACTION_MAX; ++i) {
 		for (int j = 0; j < MLX5DR_TABLE_TYPE_MAX; ++j) {
 			int ret;
 
-			if (priv->hw_global_actions.drop.arr[i][j] == NULL)
+			if (array->arr[i][j] == NULL)
 				continue;
 
-			ret = mlx5dr_action_destroy(priv->hw_global_actions.drop.arr[i][j]);
+			ret = mlx5dr_action_destroy(array->arr[i][j]);
 			if (ret != 0)
-				DRV_LOG(ERR, "port %u failed to free HWS action",
-					priv->dev_data->port_id);
-			priv->hw_global_actions.drop.arr[i][j] = NULL;
+				DRV_LOG(ERR, "port %u failed to free %s HWS action",
+					priv->dev_data->port_id,
+					name);
+			array->arr[i][j] = NULL;
 		}
 	}
+}
+
+void
+mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
+{
+	rte_spinlock_lock(&priv->hw_global_actions.lock);
+
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.drop, "drop");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.tag, "tag");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
 
-struct mlx5dr_action *
-mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
-				enum mlx5dr_table_type table_type,
-				bool is_root)
+typedef struct mlx5dr_action *(*global_action_create_t)(struct mlx5dr_context *ctx,
+							uint32_t action_flags);
+
+static struct mlx5dr_action *
+action_create_drop_cb(struct mlx5dr_context *ctx,
+		      uint32_t action_flags)
+{
+	return mlx5dr_action_create_dest_drop(ctx, action_flags);
+}
+
+static struct mlx5dr_action *
+action_create_tag_cb(struct mlx5dr_context *ctx,
+		     uint32_t action_flags)
+{
+	return mlx5dr_action_create_tag(ctx, action_flags);
+}
+
+static struct mlx5dr_action *
+global_action_get(struct mlx5_priv *priv,
+		  struct mlx5_hws_global_actions_array *array,
+		  const char *name,
+		  enum mlx5dr_table_type table_type,
+		  bool is_root,
+		  global_action_create_t create_cb)
 {
 	enum mlx5dr_action_flags action_flags;
 	struct mlx5dr_action *action = NULL;
@@ -50,19 +80,47 @@ mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
 
 	rte_spinlock_lock(&priv->hw_global_actions.lock);
 
-	action = priv->hw_global_actions.drop.arr[!is_root][table_type];
+	action = array->arr[!is_root][table_type];
 	if (action != NULL)
 		goto unlock_ret;
 
-	action = mlx5dr_action_create_dest_drop(priv->dr_ctx, action_flags);
+	action = create_cb(priv->dr_ctx, action_flags);
 	if (action == NULL) {
-		DRV_LOG(ERR, "port %u failed to create drop HWS action", priv->dev_data->port_id);
+		DRV_LOG(ERR, "port %u failed to create %s HWS action",
+			priv->dev_data->port_id,
+			name);
 		goto unlock_ret;
 	}
 
-	priv->hw_global_actions.drop.arr[!is_root][table_type] = action;
+	array->arr[!is_root][table_type] = action;
 
 unlock_ret:
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 	return action;
 }
+
+struct mlx5dr_action *
+mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
+				enum mlx5dr_table_type table_type,
+				bool is_root)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.drop,
+				 "drop",
+				 table_type,
+				 is_root,
+				 action_create_drop_cb);
+}
+
+struct mlx5dr_action *
+mlx5_hws_global_action_tag_get(struct mlx5_priv *priv,
+			       enum mlx5dr_table_type table_type,
+			       bool is_root)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.tag,
+				 "tag",
+				 table_type,
+				 is_root,
+				 action_create_tag_cb);
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index 3921004102..bec9f3e0e8 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -25,6 +25,7 @@ struct mlx5_hws_global_actions_array {
 
 struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array drop;
+	struct mlx5_hws_global_actions_array tag;
 	rte_spinlock_t lock;
 };
 
@@ -36,4 +37,8 @@ struct mlx5dr_action *mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
 						      enum mlx5dr_table_type table_type,
 						      bool is_root);
 
+struct mlx5dr_action *mlx5_hws_global_action_tag_get(struct mlx5_priv *priv,
+						     enum mlx5dr_table_type table_type,
+						     bool is_root);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3



* [PATCH 5/9] net/mlx5: lazily allocate HWS pop VLAN action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (3 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 4/9] net/mlx5: lazily allocate tag " Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 6/9] net/mlx5: lazily allocate HWS push " Dariusz Sosnowski
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS pop_vlan action is used to implement the OF_POP_VLAN
rte_flow action. It was allocated either on port start or on
rte_flow_configure(). This could cause unnecessary FW resource usage
if the application did not use any OF_POP_VLAN action.

This patch extends the internal global actions API,
introduced in previous commits, to allow lazy allocation
of the HWS pop_vlan action. It will be allocated on first use,
and per flow domain, to minimize FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |  1 -
 drivers/net/mlx5/mlx5_flow_hw.c            | 20 ++++++++++----------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 21 +++++++++++++++++++++
 drivers/net/mlx5/mlx5_hws_global_actions.h |  5 +++++
 4 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 43553b1f35..9e46a8cee8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2111,7 +2111,6 @@ struct mlx5_priv {
 	/* HW steering rte flow group list header */
 	LIST_HEAD(flow_hw_grp, mlx5_flow_group) flow_hw_grp;
 	struct mlx5dr_action *hw_push_vlan[MLX5DR_TABLE_TYPE_MAX];
-	struct mlx5dr_action *hw_pop_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action **hw_vport;
 	/* HWS global actions. */
 	struct mlx5_hws_global_actions hw_global_actions;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 54c30264b2..21c61bce90 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2753,8 +2753,16 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			masks += of_vlan_offset;
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
-			acts->rule_acts[dr_pos].action =
-				priv->hw_pop_vlan[type];
+			dr_action = mlx5_hws_global_action_pop_vlan_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate pop VLAN action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate pop VLAN action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			break;
 		case RTE_FLOW_ACTION_TYPE_JUMP:
 			if (masks->conf &&
@@ -11377,10 +11385,6 @@ flow_hw_destroy_vlan(struct rte_eth_dev *dev)
 	enum mlx5dr_table_type i;
 
 	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
-		if (priv->hw_pop_vlan[i]) {
-			mlx5dr_action_destroy(priv->hw_pop_vlan[i]);
-			priv->hw_pop_vlan[i] = NULL;
-		}
 		if (priv->hw_push_vlan[i]) {
 			mlx5dr_action_destroy(priv->hw_push_vlan[i]);
 			priv->hw_push_vlan[i] = NULL;
@@ -11401,10 +11405,6 @@ _create_vlan(struct mlx5_priv *priv, enum mlx5dr_table_type type)
 	};
 
 	/* rte_errno is set in the mlx5dr_action* functions. */
-	priv->hw_pop_vlan[type] =
-		mlx5dr_action_create_pop_vlan(priv->dr_ctx, flags[type]);
-	if (!priv->hw_pop_vlan[type])
-		return -rte_errno;
 	priv->hw_push_vlan[type] =
 		mlx5dr_action_create_push_vlan(priv->dr_ctx, flags[type]);
 	if (!priv->hw_push_vlan[type])
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index 1ca444ce98..236e6f1d1a 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -41,6 +41,7 @@ mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
 
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.drop, "drop");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.tag, "tag");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.pop_vlan, "pop_vlan");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
@@ -62,6 +63,13 @@ action_create_tag_cb(struct mlx5dr_context *ctx,
 	return mlx5dr_action_create_tag(ctx, action_flags);
 }
 
+static struct mlx5dr_action *
+action_create_pop_vlan_cb(struct mlx5dr_context *ctx,
+			  uint32_t action_flags)
+{
+	return mlx5dr_action_create_pop_vlan(ctx, action_flags);
+}
+
 static struct mlx5dr_action *
 global_action_get(struct mlx5_priv *priv,
 		  struct mlx5_hws_global_actions_array *array,
@@ -124,3 +132,16 @@ mlx5_hws_global_action_tag_get(struct mlx5_priv *priv,
 				 is_root,
 				 action_create_tag_cb);
 }
+
+struct mlx5dr_action *
+mlx5_hws_global_action_pop_vlan_get(struct mlx5_priv *priv,
+				    enum mlx5dr_table_type table_type,
+				    bool is_root)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.pop_vlan,
+				 "pop_vlan",
+				 table_type,
+				 is_root,
+				 action_create_pop_vlan_cb);
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index bec9f3e0e8..d04ebc42be 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -26,6 +26,7 @@ struct mlx5_hws_global_actions_array {
 struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array drop;
 	struct mlx5_hws_global_actions_array tag;
+	struct mlx5_hws_global_actions_array pop_vlan;
 	rte_spinlock_t lock;
 };
 
@@ -41,4 +42,8 @@ struct mlx5dr_action *mlx5_hws_global_action_tag_get(struct mlx5_priv *priv,
 						     enum mlx5dr_table_type table_type,
 						     bool is_root);
 
+struct mlx5dr_action *mlx5_hws_global_action_pop_vlan_get(struct mlx5_priv *priv,
+							  enum mlx5dr_table_type table_type,
+							  bool is_root);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3



* [PATCH 6/9] net/mlx5: lazily allocate HWS push VLAN action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (4 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 5/9] net/mlx5: lazily allocate HWS pop VLAN action Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 7/9] net/mlx5: lazily allocate HWS send to kernel action Dariusz Sosnowski
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS push_vlan action is used to implement the OF_PUSH_VLAN
rte_flow action. It was allocated either on port start or on
rte_flow_configure(). This could cause unnecessary FW resource usage
if the application did not use any OF_PUSH_VLAN action.

This patch extends the internal global actions API,
introduced in previous commits, to allow lazy allocation
of the HWS push_vlan action. It will be allocated on first use,
and per flow domain, to minimize FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |  1 -
 drivers/net/mlx5/mlx5_flow_hw.c            | 78 +++-------------------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 21 ++++++
 drivers/net/mlx5/mlx5_hws_global_actions.h |  5 ++
 4 files changed, 36 insertions(+), 69 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9e46a8cee8..94b4cb0d7b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2110,7 +2110,6 @@ struct mlx5_priv {
 	LIST_HEAD(flow_hw_tbl, rte_flow_template_table) flow_hw_tbl;
 	/* HW steering rte flow group list header */
 	LIST_HEAD(flow_hw_grp, mlx5_flow_group) flow_hw_grp;
-	struct mlx5dr_action *hw_push_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action **hw_vport;
 	/* HWS global actions. */
 	struct mlx5_hws_global_actions hw_global_actions;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 21c61bce90..2ecae1b7e7 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2737,8 +2737,16 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			mlx5_flow_hw_rxq_flag_set(dev, true);
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
-			acts->rule_acts[dr_pos].action =
-				priv->hw_push_vlan[type];
+			dr_action = mlx5_hws_global_action_push_vlan_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate push VLAN action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate push VLAN action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			if (is_template_masked_push_vlan(masks->conf))
 				acts->rule_acts[dr_pos].push_vlan.vlan_hdr =
 					vlan_hdr_to_be32(actions);
@@ -11378,65 +11386,6 @@ mlx5_flow_ct_init(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static void
-flow_hw_destroy_vlan(struct rte_eth_dev *dev)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5dr_table_type i;
-
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
-		if (priv->hw_push_vlan[i]) {
-			mlx5dr_action_destroy(priv->hw_push_vlan[i]);
-			priv->hw_push_vlan[i] = NULL;
-		}
-	}
-}
-
-static int
-_create_vlan(struct mlx5_priv *priv, enum mlx5dr_table_type type)
-{
-	const enum mlx5dr_action_flags flags[MLX5DR_TABLE_TYPE_MAX] = {
-		MLX5DR_ACTION_FLAG_HWS_RX,
-		MLX5DR_ACTION_FLAG_HWS_TX,
-		MLX5DR_ACTION_FLAG_HWS_FDB,
-		MLX5DR_ACTION_FLAG_HWS_FDB_RX,
-		MLX5DR_ACTION_FLAG_HWS_FDB_TX,
-		MLX5DR_ACTION_FLAG_HWS_FDB_UNIFIED,
-	};
-
-	/* rte_errno is set in the mlx5dr_action* functions. */
-	priv->hw_push_vlan[type] =
-		mlx5dr_action_create_push_vlan(priv->dr_ctx, flags[type]);
-	if (!priv->hw_push_vlan[type])
-		return -rte_errno;
-	return 0;
-}
-
-static int
-flow_hw_create_vlan(struct rte_eth_dev *dev)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	enum mlx5dr_table_type i, from, to;
-	int rc;
-	bool unified_fdb = is_unified_fdb(priv);
-
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i <= MLX5DR_TABLE_TYPE_NIC_TX; i++) {
-		rc = _create_vlan(priv, i);
-		if (rc)
-			return rc;
-	}
-	from = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_RX : MLX5DR_TABLE_TYPE_FDB;
-	to = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_UNIFIED : MLX5DR_TABLE_TYPE_FDB;
-	if (priv->sh->config.dv_esw_en && priv->master) {
-		for (i = from; i <= to; i++) {
-			rc = _create_vlan(priv, i);
-			if (rc)
-				return rc;
-		}
-	}
-	return 0;
-}
-
 void
 mlx5_flow_hw_cleanup_ctrl_rx_tables(struct rte_eth_dev *dev)
 {
@@ -11992,7 +11941,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
 	flow_hw_destroy_nat64_actions(priv);
-	flow_hw_destroy_vlan(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
 	if (priv->acts_ipool) {
@@ -12452,12 +12400,6 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 		if (ret < 0)
 			goto err;
 	}
-	ret = flow_hw_create_vlan(dev);
-	if (ret) {
-		rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Failed to VLAN actions.");
-		goto err;
-	}
 	if (flow_hw_should_create_nat64_actions(priv)) {
 		if (flow_hw_create_nat64_actions(priv, error))
 			goto err;
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index 236e6f1d1a..2bbfa5a24c 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -42,6 +42,7 @@ mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.drop, "drop");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.tag, "tag");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.pop_vlan, "pop_vlan");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.push_vlan, "push_vlan");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
@@ -70,6 +71,13 @@ action_create_pop_vlan_cb(struct mlx5dr_context *ctx,
 	return mlx5dr_action_create_pop_vlan(ctx, action_flags);
 }
 
+static struct mlx5dr_action *
+action_create_push_vlan_cb(struct mlx5dr_context *ctx,
+			   uint32_t action_flags)
+{
+	return mlx5dr_action_create_push_vlan(ctx, action_flags);
+}
+
 static struct mlx5dr_action *
 global_action_get(struct mlx5_priv *priv,
 		  struct mlx5_hws_global_actions_array *array,
@@ -145,3 +153,16 @@ mlx5_hws_global_action_pop_vlan_get(struct mlx5_priv *priv,
 				 is_root,
 				 action_create_pop_vlan_cb);
 }
+
+struct mlx5dr_action *
+mlx5_hws_global_action_push_vlan_get(struct mlx5_priv *priv,
+				     enum mlx5dr_table_type table_type,
+				     bool is_root)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.push_vlan,
+				 "push_vlan",
+				 table_type,
+				 is_root,
+				 action_create_push_vlan_cb);
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index d04ebc42be..4281ba701c 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -27,6 +27,7 @@ struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array drop;
 	struct mlx5_hws_global_actions_array tag;
 	struct mlx5_hws_global_actions_array pop_vlan;
+	struct mlx5_hws_global_actions_array push_vlan;
 	rte_spinlock_t lock;
 };
 
@@ -46,4 +47,8 @@ struct mlx5dr_action *mlx5_hws_global_action_pop_vlan_get(struct mlx5_priv *priv
 							  enum mlx5dr_table_type table_type,
 							  bool is_root);
 
+struct mlx5dr_action *mlx5_hws_global_action_push_vlan_get(struct mlx5_priv *priv,
+							   enum mlx5dr_table_type table_type,
+							   bool is_root);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3



* [PATCH 7/9] net/mlx5: lazily allocate HWS send to kernel action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (5 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 6/9] net/mlx5: lazily allocate HWS push " Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 8/9] net/mlx5: lazily allocate HWS NAT64 action Dariusz Sosnowski
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS send_to_kernel action is used to implement the
SEND_TO_KERNEL rte_flow action.
It was allocated either on port start or on rte_flow_configure().
This could cause unnecessary FW resource usage
if the application did not use any SEND_TO_KERNEL action.

This patch extends the internal global actions API,
introduced in previous commits, to allow lazy allocation
of the HWS send_to_kernel action. It will be allocated on first use,
and will be allocated per domain to minimize FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |  2 -
 drivers/net/mlx5/mlx5_flow_hw.c            | 90 ++++------------------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 59 +++++++++++---
 drivers/net/mlx5/mlx5_hws_global_actions.h |  5 ++
 4 files changed, 66 insertions(+), 90 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 94b4cb0d7b..739b414faf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2115,8 +2115,6 @@ struct mlx5_priv {
 	struct mlx5_hws_global_actions hw_global_actions;
 	/* HW steering global default miss action. */
 	struct mlx5dr_action *hw_def_miss;
-	/* HW steering global send to kernel action. */
-	struct mlx5dr_action *hw_send_to_kernel[MLX5DR_TABLE_TYPE_MAX];
 	/* HW steering create ongoing rte flow table list header. */
 	LIST_HEAD(flow_hw_tbl_ongo, rte_flow_template_table) flow_hw_tbl_ongo;
 	struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2ecae1b7e7..7fafe3fe6a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2913,14 +2913,24 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 			if (is_root) {
-				__flow_hw_action_template_destroy(dev, acts);
 				rte_flow_error_set(&sub_error, ENOTSUP,
 					RTE_FLOW_ERROR_TYPE_ACTION,
 					NULL,
 					"Send to kernel action on root table is not supported in HW steering mode");
 				goto err;
 			}
-			acts->rule_acts[dr_pos].action = priv->hw_send_to_kernel[type];
+			dr_action = mlx5_hws_global_action_send_to_kernel_get(priv,
+					type,
+					MLX5_HW_LOWEST_PRIO_ROOT);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate send to kernel action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate send to kernel action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			err = flow_hw_modify_field_compile(dev, attr, actions,
@@ -7366,36 +7376,14 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			action_flags |= MLX5_FLOW_ACTION_JUMP;
 			break;
 #ifdef HAVE_MLX5DV_DR_ACTION_CREATE_DEST_ROOT_TABLE
-		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: {
-			bool res;
-
+		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 			if (priv->shared_host)
 				return rte_flow_error_set(error, ENOTSUP,
 							  RTE_FLOW_ERROR_TYPE_ACTION,
 							  action,
 							  "action not supported in guest port");
-			if (attr->ingress) {
-				res = priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_NIC_RX];
-			} else if (attr->egress) {
-				res = priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_NIC_TX];
-			} else {
-				if (!is_unified_fdb(priv))
-					res = priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_FDB];
-				else
-					res =
-					    priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_FDB_RX] &&
-					    priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_FDB_TX] &&
-					    priv->hw_send_to_kernel[MLX5DR_TABLE_TYPE_FDB_UNIFIED];
-			}
-			if (!res)
-				return rte_flow_error_set(error, ENOTSUP,
-							  RTE_FLOW_ERROR_TYPE_ACTION,
-							  action,
-							  "action is not available");
-
 			action_flags |= MLX5_FLOW_ACTION_SEND_TO_KERNEL;
 			break;
-		}
 #endif
 		case RTE_FLOW_ACTION_TYPE_QUEUE:
 			ret = mlx5_hw_validate_action_queue(dev, action, mask,
@@ -9891,55 +9879,6 @@ flow_hw_free_vport_actions(struct mlx5_priv *priv)
 	priv->hw_vport = NULL;
 }
 
-#ifdef HAVE_MLX5DV_DR_ACTION_CREATE_DEST_ROOT_TABLE
-static __rte_always_inline void
-_create_send_to_kernel_actions(struct mlx5_priv *priv, int type)
-{
-	int action_flag;
-
-	action_flag = mlx5_hw_act_flag[1][type];
-	priv->hw_send_to_kernel[type] =
-		mlx5dr_action_create_dest_root(priv->dr_ctx,
-				MLX5_HW_LOWEST_PRIO_ROOT,
-				action_flag);
-	if (!priv->hw_send_to_kernel[type])
-		DRV_LOG(WARNING, "Unable to create HWS send to kernel action");
-}
-#endif
-
-static void
-flow_hw_create_send_to_kernel_actions(__rte_unused struct mlx5_priv *priv,
-				      __rte_unused bool is_proxy)
-{
-#ifdef HAVE_MLX5DV_DR_ACTION_CREATE_DEST_ROOT_TABLE
-	int i, from, to;
-	bool unified_fdb = is_unified_fdb(priv);
-
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i <= MLX5DR_TABLE_TYPE_NIC_TX; i++)
-		_create_send_to_kernel_actions(priv, i);
-
-	if (is_proxy) {
-		from = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_RX : MLX5DR_TABLE_TYPE_FDB;
-		to = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_UNIFIED : MLX5DR_TABLE_TYPE_FDB;
-		for (i = from; i <= to; i++)
-			_create_send_to_kernel_actions(priv, i);
-	}
-#endif
-}
-
-static void
-flow_hw_destroy_send_to_kernel_action(struct mlx5_priv *priv)
-{
-	int i;
-
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
-		if (priv->hw_send_to_kernel[i]) {
-			mlx5dr_action_destroy(priv->hw_send_to_kernel[i]);
-			priv->hw_send_to_kernel[i] = NULL;
-		}
-	}
-}
-
 static bool
 flow_hw_should_create_nat64_actions(struct mlx5_priv *priv)
 {
@@ -11941,7 +11880,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
 	flow_hw_destroy_nat64_actions(priv);
-	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_free_vport_actions(priv);
 	if (priv->acts_ipool) {
 		mlx5_ipool_destroy(priv->acts_ipool);
@@ -12360,8 +12298,6 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 			goto err;
 		}
 	}
-	if (!priv->shared_host)
-		flow_hw_create_send_to_kernel_actions(priv, is_proxy);
 	if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) {
 		if (mlx5_flow_ct_init(dev, port_attr->nb_conn_tracks, nb_q_updated))
 			goto err;
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index 2bbfa5a24c..d8b21a67f1 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -43,48 +43,67 @@ mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.tag, "tag");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.pop_vlan, "pop_vlan");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.push_vlan, "push_vlan");
+	global_actions_array_cleanup(priv,
+				     &priv->hw_global_actions.send_to_kernel,
+				     "send_to_kernel");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
 
 typedef struct mlx5dr_action *(*global_action_create_t)(struct mlx5dr_context *ctx,
-							uint32_t action_flags);
+							uint32_t action_flags,
+							void *user_data);
 
 static struct mlx5dr_action *
 action_create_drop_cb(struct mlx5dr_context *ctx,
-		      uint32_t action_flags)
+		      uint32_t action_flags,
+		      void *user_data __rte_unused)
 {
 	return mlx5dr_action_create_dest_drop(ctx, action_flags);
 }
 
 static struct mlx5dr_action *
 action_create_tag_cb(struct mlx5dr_context *ctx,
-		     uint32_t action_flags)
+		     uint32_t action_flags,
+		     void *user_data __rte_unused)
 {
 	return mlx5dr_action_create_tag(ctx, action_flags);
 }
 
 static struct mlx5dr_action *
 action_create_pop_vlan_cb(struct mlx5dr_context *ctx,
-			  uint32_t action_flags)
+			  uint32_t action_flags,
+			  void *user_data __rte_unused)
 {
 	return mlx5dr_action_create_pop_vlan(ctx, action_flags);
 }
 
 static struct mlx5dr_action *
 action_create_push_vlan_cb(struct mlx5dr_context *ctx,
-			   uint32_t action_flags)
+			   uint32_t action_flags,
+			   void *user_data __rte_unused)
 {
 	return mlx5dr_action_create_push_vlan(ctx, action_flags);
 }
 
+static struct mlx5dr_action *
+action_create_send_to_kernel_cb(struct mlx5dr_context *ctx,
+				uint32_t action_flags,
+				void *user_data)
+{
+	uint16_t priority = (uint16_t)(uintptr_t)user_data;
+
+	return mlx5dr_action_create_dest_root(ctx, priority, action_flags);
+}
+
 static struct mlx5dr_action *
 global_action_get(struct mlx5_priv *priv,
 		  struct mlx5_hws_global_actions_array *array,
 		  const char *name,
 		  enum mlx5dr_table_type table_type,
 		  bool is_root,
-		  global_action_create_t create_cb)
+		  global_action_create_t create_cb,
+		  void *user_data)
 {
 	enum mlx5dr_action_flags action_flags;
 	struct mlx5dr_action *action = NULL;
@@ -100,7 +119,7 @@ global_action_get(struct mlx5_priv *priv,
 	if (action != NULL)
 		goto unlock_ret;
 
-	action = create_cb(priv->dr_ctx, action_flags);
+	action = create_cb(priv->dr_ctx, action_flags, user_data);
 	if (action == NULL) {
 		DRV_LOG(ERR, "port %u failed to create %s HWS action",
 			priv->dev_data->port_id,
@@ -125,7 +144,8 @@ mlx5_hws_global_action_drop_get(struct mlx5_priv *priv,
 				 "drop",
 				 table_type,
 				 is_root,
-				 action_create_drop_cb);
+				 action_create_drop_cb,
+				 NULL);
 }
 
 struct mlx5dr_action *
@@ -138,7 +158,8 @@ mlx5_hws_global_action_tag_get(struct mlx5_priv *priv,
 				 "tag",
 				 table_type,
 				 is_root,
-				 action_create_tag_cb);
+				 action_create_tag_cb,
+				 NULL);
 }
 
 struct mlx5dr_action *
@@ -151,7 +172,8 @@ mlx5_hws_global_action_pop_vlan_get(struct mlx5_priv *priv,
 				 "pop_vlan",
 				 table_type,
 				 is_root,
-				 action_create_pop_vlan_cb);
+				 action_create_pop_vlan_cb,
+				 NULL);
 }
 
 struct mlx5dr_action *
@@ -164,5 +186,20 @@ mlx5_hws_global_action_push_vlan_get(struct mlx5_priv *priv,
 				 "push_vlan",
 				 table_type,
 				 is_root,
-				 action_create_push_vlan_cb);
+				 action_create_push_vlan_cb,
+				 NULL);
+}
+
+struct mlx5dr_action *
+mlx5_hws_global_action_send_to_kernel_get(struct mlx5_priv *priv,
+					   enum mlx5dr_table_type table_type,
+					   uint16_t priority)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.send_to_kernel,
+				 "send_to_kernel",
+				 table_type,
+				 false, /* send-to-kernel is non-root only */
+				 action_create_send_to_kernel_cb,
+				 (void *)(uintptr_t)priority);
 }
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index 4281ba701c..7fbca9fc96 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -28,6 +28,7 @@ struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array tag;
 	struct mlx5_hws_global_actions_array pop_vlan;
 	struct mlx5_hws_global_actions_array push_vlan;
+	struct mlx5_hws_global_actions_array send_to_kernel;
 	rte_spinlock_t lock;
 };
 
@@ -51,4 +52,8 @@ struct mlx5dr_action *mlx5_hws_global_action_push_vlan_get(struct mlx5_priv *pri
 							   enum mlx5dr_table_type table_type,
 							   bool is_root);
 
+struct mlx5dr_action *mlx5_hws_global_action_send_to_kernel_get(struct mlx5_priv *priv,
+								 enum mlx5dr_table_type table_type,
+								 uint16_t priority);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 8/9] net/mlx5: lazily allocate HWS NAT64 action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (6 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 7/9] net/mlx5: lazily allocate HWS send to kernel action Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-02-25 11:59 ` [PATCH 9/9] net/mlx5: lazily allocate HWS default miss action Dariusz Sosnowski
  2026-03-02 11:22 ` [PATCH 0/9] net/mlx5: lazily allocate HWS actions Raslan Darawsheh
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS NAT64 action is used to implement the NAT64 rte_flow action.
It was allocated either on port start or on rte_flow_configure().
This could cause unnecessary FW resource usage
even if the user did not use any NAT64 action.

This patch extends the global actions internal API,
introduced in previous commits, to allow lazy allocation
of the HWS NAT64 action. The action is now allocated on first use,
and per domain, to minimize FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |   6 -
 drivers/net/mlx5/mlx5_flow.h               |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c            | 215 ++++-----------------
 drivers/net/mlx5/mlx5_hws_global_actions.c |  45 +++++
 drivers/net/mlx5/mlx5_hws_global_actions.h |   7 +
 5 files changed, 94 insertions(+), 180 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 739b414faf..75e61d6b5b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2127,12 +2127,6 @@ struct mlx5_priv {
 
 	struct rte_flow_actions_template *action_template_drop[MLX5DR_TABLE_TYPE_MAX];
 
-	/*
-	 * The NAT64 action can be shared among matchers per domain.
-	 * [0]: RTE_FLOW_NAT64_6TO4, [1]: RTE_FLOW_NAT64_4TO6
-	 * Todo: consider to add *_MAX macro.
-	 */
-	struct mlx5dr_action *action_nat64[MLX5DR_TABLE_TYPE_MAX][2];
 	struct mlx5_indexed_pool *ptype_rss_groups;
 	struct mlx5_nta_sample_ctx *nta_sample_ctx;
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index c8af3fe0be..4c56e638ab 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1625,6 +1625,7 @@ struct mlx5_hw_actions {
 	uint32_t mark:1; /* Indicate the mark action. */
 	cnt_id_t cnt_id; /* Counter id. */
 	uint32_t mtr_id; /* Meter id. */
+	struct mlx5dr_action *nat64[2]; /* [RTE_FLOW_NAT64_6TO4], [RTE_FLOW_NAT64_4TO6] */
 	/* Translated DR action array from action template. */
 	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
 };
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7fafe3fe6a..c74705be8f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -369,6 +369,7 @@ static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
 					  const uint32_t queue,
 					  const struct rte_flow_hw *flow,
 					  struct rte_flow_error *error);
+static bool flow_hw_should_create_nat64_actions(struct mlx5_priv *priv);
 
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
@@ -3023,12 +3024,38 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 			    ((const struct rte_flow_action_nat64 *)masks->conf)->type) {
 				const struct rte_flow_action_nat64 *nat64_c =
 					(const struct rte_flow_action_nat64 *)actions->conf;
-
-				acts->rule_acts[dr_pos].action =
-					priv->action_nat64[type][nat64_c->type];
-			} else if (__flow_hw_act_data_general_append(priv, acts,
-								     actions->type,
-								     src_pos, dr_pos))
+				dr_action = mlx5_hws_global_action_nat64_get(priv,
+									     type,
+									     nat64_c->type);
+				if (dr_action == NULL) {
+					DRV_LOG(ERR, "port %u failed to allocate NAT64 action",
+						priv->dev_data->port_id);
+					rte_flow_error_set(&sub_error, ENOMEM,
+							   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+							   "failed to allocate NAT64 action");
+					goto err;
+				}
+				acts->rule_acts[dr_pos].action = dr_action;
+				break;
+			}
+			acts->nat64[RTE_FLOW_NAT64_6TO4] = mlx5_hws_global_action_nat64_get(priv,
+					type,
+					RTE_FLOW_NAT64_6TO4);
+			acts->nat64[RTE_FLOW_NAT64_4TO6] = mlx5_hws_global_action_nat64_get(priv,
+					type,
+					RTE_FLOW_NAT64_4TO6);
+			if (!acts->nat64[RTE_FLOW_NAT64_6TO4] ||
+			    !acts->nat64[RTE_FLOW_NAT64_4TO6]) {
+				DRV_LOG(ERR, "port %u failed to allocate both NAT64 actions",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate both NAT64 actions");
+				goto err;
+			}
+			if (__flow_hw_act_data_general_append(priv, acts,
+							      actions->type,
+							      src_pos, dr_pos))
 				goto err;
 			break;
 		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
@@ -3894,9 +3921,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_NAT64:
 			nat64_c = action->conf;
-			MLX5_ASSERT(table->type < MLX5DR_TABLE_TYPE_MAX);
-			rule_acts[act_data->action_dst].action =
-				priv->action_nat64[table->type][nat64_c->type];
+			rule_acts[act_data->action_dst].action = hw_acts->nat64[nat64_c->type];
 			break;
 		case RTE_FLOW_ACTION_TYPE_JUMP_TO_TABLE_INDEX:
 			jump_table = ((const struct rte_flow_action_jump_to_table_index *)
@@ -6916,76 +6941,16 @@ flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
 }
 
 static int
-flow_hw_validate_action_nat64(struct rte_eth_dev *dev,
-			      const struct rte_flow_actions_template_attr *attr,
-			      const struct rte_flow_action *action,
-			      const struct rte_flow_action *mask,
-			      uint64_t action_flags,
-			      struct rte_flow_error *error)
+flow_hw_validate_action_nat64(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const struct rte_flow_action_nat64 *nat64_c;
-	enum rte_flow_nat64_type cov_type;
 
-	RTE_SET_USED(action_flags);
-	if (mask->conf && ((const struct rte_flow_action_nat64 *)mask->conf)->type) {
-		nat64_c = (const struct rte_flow_action_nat64 *)action->conf;
-		cov_type = nat64_c->type;
-		if ((attr->ingress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][cov_type]) ||
-		    (attr->egress && !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][cov_type]))
-			goto err_out;
-		if (attr->transfer) {
-			if (!is_unified_fdb(priv)) {
-				if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB][cov_type])
-					goto err_out;
-			} else {
-				if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_RX][cov_type] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_TX][cov_type] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_UNIFIED][cov_type])
-					goto err_out;
-			}
-		}
-	} else {
-		/*
-		 * Usually, the actions will be used on both directions. For non-masked actions,
-		 * both directions' actions will be checked.
-		 */
-		if (attr->ingress)
-			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_6TO4] ||
-			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_RX][RTE_FLOW_NAT64_4TO6])
-				goto err_out;
-		if (attr->egress)
-			if (!priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_6TO4] ||
-			    !priv->action_nat64[MLX5DR_TABLE_TYPE_NIC_TX][RTE_FLOW_NAT64_4TO6])
-				goto err_out;
-		if (attr->transfer) {
-			if (!is_unified_fdb(priv)) {
-				if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB]
-						       [RTE_FLOW_NAT64_6TO4] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB]
-						       [RTE_FLOW_NAT64_4TO6])
-					goto err_out;
-			} else {
-				if (!priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_RX]
-						       [RTE_FLOW_NAT64_6TO4] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_RX]
-						       [RTE_FLOW_NAT64_4TO6] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_TX]
-						       [RTE_FLOW_NAT64_6TO4] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_TX]
-						       [RTE_FLOW_NAT64_4TO6] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_UNIFIED]
-						       [RTE_FLOW_NAT64_6TO4] ||
-				    !priv->action_nat64[MLX5DR_TABLE_TYPE_FDB_UNIFIED]
-						       [RTE_FLOW_NAT64_4TO6])
-					goto err_out;
-			}
-		}
-	}
+	if (!flow_hw_should_create_nat64_actions(priv))
+		return rte_flow_error_set(error, EOPNOTSUPP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "NAT64 action is not supported.");
+
 	return 0;
-err_out:
-	return rte_flow_error_set(error, EOPNOTSUPP, RTE_FLOW_ERROR_TYPE_ACTION,
-				  NULL, "NAT64 action is not supported.");
 }
 
 static int
@@ -7519,8 +7484,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NAT64:
-			ret = flow_hw_validate_action_nat64(dev, attr, action, mask,
-							    action_flags, error);
+			ret = flow_hw_validate_action_nat64(dev, error);
 			if (ret != 0)
 				return ret;
 			action_flags |= MLX5_FLOW_ACTION_NAT64;
@@ -9892,94 +9856,6 @@ flow_hw_should_create_nat64_actions(struct mlx5_priv *priv)
 	return true;
 }
 
-static void
-flow_hw_destroy_nat64_actions(struct mlx5_priv *priv)
-{
-	uint32_t i;
-
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i < MLX5DR_TABLE_TYPE_MAX; i++) {
-		if (priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]) {
-			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_6TO4]);
-			priv->action_nat64[i][RTE_FLOW_NAT64_6TO4] = NULL;
-		}
-		if (priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]) {
-			(void)mlx5dr_action_destroy(priv->action_nat64[i][RTE_FLOW_NAT64_4TO6]);
-			priv->action_nat64[i][RTE_FLOW_NAT64_4TO6] = NULL;
-		}
-	}
-}
-
-static int
-_create_nat64_actions(struct mlx5_priv *priv,
-		      struct mlx5dr_action_nat64_attr *attr,
-		      int type,
-		      struct rte_flow_error *error)
-{
-	const uint32_t flags[MLX5DR_TABLE_TYPE_MAX] = {
-		MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_SHARED,
-		MLX5DR_ACTION_FLAG_HWS_TX | MLX5DR_ACTION_FLAG_SHARED,
-		MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED,
-		MLX5DR_ACTION_FLAG_HWS_FDB_RX | MLX5DR_ACTION_FLAG_SHARED,
-		MLX5DR_ACTION_FLAG_HWS_FDB_TX | MLX5DR_ACTION_FLAG_SHARED,
-		MLX5DR_ACTION_FLAG_HWS_FDB_UNIFIED | MLX5DR_ACTION_FLAG_SHARED,
-	};
-	struct mlx5dr_action *act;
-
-	attr->flags = (enum mlx5dr_action_nat64_flags)
-		(MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
-	act = mlx5dr_action_create_nat64(priv->dr_ctx, attr, flags[type]);
-	if (!act)
-		return rte_flow_error_set(error, rte_errno,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Failed to create v6 to v4 action.");
-	priv->action_nat64[type][RTE_FLOW_NAT64_6TO4] = act;
-	attr->flags = (enum mlx5dr_action_nat64_flags)
-		(MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR);
-	act = mlx5dr_action_create_nat64(priv->dr_ctx, attr, flags[type]);
-	if (!act)
-		return rte_flow_error_set(error, rte_errno,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Failed to create v4 to v6 action.");
-	priv->action_nat64[type][RTE_FLOW_NAT64_4TO6] = act;
-	return 0;
-}
-
-static int
-flow_hw_create_nat64_actions(struct mlx5_priv *priv, struct rte_flow_error *error)
-{
-	struct mlx5dr_action_nat64_attr attr;
-	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
-	uint32_t i, from, to;
-	int rc;
-	bool unified_fdb = is_unified_fdb(priv);
-
-	attr.registers = regs;
-	/* Try to use 3 registers by default. */
-	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
-	for (i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++) {
-		MLX5_ASSERT(priv->sh->registers.nat64_regs[i] != REG_NON);
-		regs[i] = mlx5_convert_reg_to_field(priv->sh->registers.nat64_regs[i]);
-	}
-	for (i = MLX5DR_TABLE_TYPE_NIC_RX; i <= MLX5DR_TABLE_TYPE_NIC_TX; i++) {
-		rc = _create_nat64_actions(priv, &attr, i, error);
-		if (rc)
-			return rc;
-	}
-	if (priv->sh->config.dv_esw_en) {
-		from = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_RX :
-					MLX5DR_TABLE_TYPE_FDB;
-		to = unified_fdb ? MLX5DR_TABLE_TYPE_FDB_UNIFIED :
-			MLX5DR_TABLE_TYPE_FDB;
-
-		for (i = from; i <= to; i++) {
-			rc = _create_nat64_actions(priv, &attr, i, error);
-			if (rc)
-				return rc;
-		}
-	}
-	return 0;
-}
-
 /**
  * Create an egress pattern template matching on source SQ.
  *
@@ -11879,7 +11755,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 	}
 	if (priv->hw_def_miss)
 		mlx5dr_action_destroy(priv->hw_def_miss);
-	flow_hw_destroy_nat64_actions(priv);
 	flow_hw_free_vport_actions(priv);
 	if (priv->acts_ipool) {
 		mlx5_ipool_destroy(priv->acts_ipool);
@@ -12336,14 +12211,6 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 		if (ret < 0)
 			goto err;
 	}
-	if (flow_hw_should_create_nat64_actions(priv)) {
-		if (flow_hw_create_nat64_actions(priv, error))
-			goto err;
-	} else {
-		DRV_LOG(WARNING, "Cannot create NAT64 action on port %u, "
-			"please check the FW version. NAT64 will not be supported.",
-			dev->data->port_id);
-	}
 	if (_queue_attr)
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index d8b21a67f1..6520879ae4 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -5,6 +5,7 @@
 #include "mlx5_hws_global_actions.h"
 
 #include "mlx5.h"
+#include "mlx5_flow.h"
 
 void
 mlx5_hws_global_actions_init(struct mlx5_priv *priv)
@@ -46,6 +47,8 @@ mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
 	global_actions_array_cleanup(priv,
 				     &priv->hw_global_actions.send_to_kernel,
 				     "send_to_kernel");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.nat64_6to4, "nat64_6to4");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.nat64_4to6, "nat64_4to6");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
@@ -96,6 +99,18 @@ action_create_send_to_kernel_cb(struct mlx5dr_context *ctx,
 	return mlx5dr_action_create_dest_root(ctx, priority, action_flags);
 }
 
+static struct mlx5dr_action *
+action_create_nat64_cb(struct mlx5dr_context *ctx,
+		       uint32_t action_flags,
+		       void *user_data)
+{
+	struct mlx5dr_action_nat64_attr *attr = user_data;
+
+	/* NAT64 action must always be marked as shared. */
+	return mlx5dr_action_create_nat64(ctx, attr,
+					  action_flags | MLX5DR_ACTION_FLAG_SHARED);
+}
+
 static struct mlx5dr_action *
 global_action_get(struct mlx5_priv *priv,
 		  struct mlx5_hws_global_actions_array *array,
@@ -203,3 +218,33 @@ mlx5_hws_global_action_send_to_kernel_get(struct mlx5_priv *priv,
 				 action_create_send_to_kernel_cb,
 				 (void *)(uintptr_t)priority);
 }
+
+struct mlx5dr_action *
+mlx5_hws_global_action_nat64_get(struct mlx5_priv *priv,
+				 enum mlx5dr_table_type table_type,
+				 enum rte_flow_nat64_type nat64_type)
+{
+	struct mlx5_hws_global_actions_array *array;
+	uint8_t regs[MLX5_FLOW_NAT64_REGS_MAX];
+	struct mlx5dr_action_nat64_attr attr;
+	const char *name;
+
+	for (uint32_t i = 0; i < MLX5_FLOW_NAT64_REGS_MAX; i++)
+		regs[i] = mlx5_convert_reg_to_field(priv->sh->registers.nat64_regs[i]);
+
+	attr.num_of_registers = MLX5_FLOW_NAT64_REGS_MAX;
+	attr.registers = regs;
+
+	if (nat64_type == RTE_FLOW_NAT64_6TO4) {
+		attr.flags = MLX5DR_ACTION_NAT64_V6_TO_V4 | MLX5DR_ACTION_NAT64_BACKUP_ADDR;
+		array = &priv->hw_global_actions.nat64_6to4;
+		name = "nat64_6to4";
+	} else {
+		attr.flags = MLX5DR_ACTION_NAT64_V4_TO_V6 | MLX5DR_ACTION_NAT64_BACKUP_ADDR;
+		array = &priv->hw_global_actions.nat64_4to6;
+		name = "nat64_4to6";
+	}
+
+	return global_action_get(priv, array, name, table_type,
+				 false, action_create_nat64_cb, &attr);
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index 7fbca9fc96..788b5c124a 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -7,6 +7,7 @@
 
 #include <stdint.h>
 
+#include <rte_flow.h>
 #include <rte_spinlock.h>
 
 #include "hws/mlx5dr.h"
@@ -29,6 +30,8 @@ struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array pop_vlan;
 	struct mlx5_hws_global_actions_array push_vlan;
 	struct mlx5_hws_global_actions_array send_to_kernel;
+	struct mlx5_hws_global_actions_array nat64_6to4;
+	struct mlx5_hws_global_actions_array nat64_4to6;
 	rte_spinlock_t lock;
 };
 
@@ -56,4 +59,8 @@ struct mlx5dr_action *mlx5_hws_global_action_send_to_kernel_get(struct mlx5_priv
 								 enum mlx5dr_table_type table_type,
 								 uint16_t priority);
 
+struct mlx5dr_action *mlx5_hws_global_action_nat64_get(struct mlx5_priv *priv,
+						       enum mlx5dr_table_type table_type,
+						       enum rte_flow_nat64_type nat64_type);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 9/9] net/mlx5: lazily allocate HWS default miss action
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (7 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 8/9] net/mlx5: lazily allocate HWS NAT64 action Dariusz Sosnowski
@ 2026-02-25 11:59 ` Dariusz Sosnowski
  2026-03-02 11:22 ` [PATCH 0/9] net/mlx5: lazily allocate HWS actions Raslan Darawsheh
  9 siblings, 0 replies; 11+ messages in thread
From: Dariusz Sosnowski @ 2026-02-25 11:59 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad
  Cc: dev, Raslan Darawsheh

The HWS default miss action is used to implement the PORT_REPRESENTOR
rte_flow action and is used internally for LACP default rules.
It was allocated either on port start or on rte_flow_configure().
This could cause unnecessary FW resource usage
even if the user did not use PORT_REPRESENTOR or the application
was running on a setup without LACP.

This patch extends the global actions internal API,
introduced in previous commits, to allow lazy allocation
of the HWS default miss action. The action is now allocated on first use,
and per domain, to minimize FW resource usage.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h                    |  2 -
 drivers/net/mlx5/mlx5_flow_hw.c            | 76 ++++++++++------------
 drivers/net/mlx5/mlx5_hws_global_actions.c | 23 +++++++
 drivers/net/mlx5/mlx5_hws_global_actions.h |  5 ++
 4 files changed, 61 insertions(+), 45 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 75e61d6b5b..c54266ec26 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2113,8 +2113,6 @@ struct mlx5_priv {
 	struct mlx5dr_action **hw_vport;
 	/* HWS global actions. */
 	struct mlx5_hws_global_actions hw_global_actions;
-	/* HW steering global default miss action. */
-	struct mlx5dr_action *hw_def_miss;
 	/* HW steering create ongoing rte flow table list header. */
 	LIST_HEAD(flow_hw_tbl_ongo, rte_flow_template_table) flow_hw_tbl_ongo;
 	struct mlx5_indexed_pool *acts_ipool; /* Action data indexed pool. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c74705be8f..041066a94f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2690,7 +2690,16 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 				DRV_LOG(ERR, "Port representor is not supported in root table.");
 				goto err;
 			}
-			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
+			dr_action = mlx5_hws_global_action_def_miss_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate port representor action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate port representor action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			break;
 		case RTE_FLOW_ACTION_TYPE_FLAG:
 			dr_action = mlx5_hws_global_action_tag_get(priv, type, is_root);
@@ -3017,7 +3026,16 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 				goto err;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS:
-			acts->rule_acts[dr_pos].action = priv->hw_def_miss;
+			dr_action = mlx5_hws_global_action_def_miss_get(priv, type, is_root);
+			if (dr_action == NULL) {
+				DRV_LOG(ERR, "port %u failed to allocate default miss action",
+					priv->dev_data->port_id);
+				rte_flow_error_set(&sub_error, ENOMEM,
+						   RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						   "failed to allocate default miss action");
+				goto err;
+			}
+			acts->rule_acts[dr_pos].action = dr_action;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NAT64:
 			if (masks->conf &&
@@ -6913,8 +6931,7 @@ flow_hw_validate_action_push_vlan(struct rte_eth_dev *dev,
 }
 
 static int
-flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
-				     const struct rte_flow_actions_template_attr *attr,
+flow_hw_validate_action_default_miss(const struct rte_flow_actions_template_attr *attr,
 				     uint64_t action_flags,
 				     struct rte_flow_error *error)
 {
@@ -6923,16 +6940,10 @@ flow_hw_validate_action_default_miss(struct rte_eth_dev *dev,
 	 * flows. So this validation can be ignored. It can be kept right now since
 	 * the validation will be done only once.
 	 */
-	struct mlx5_priv *priv = dev->data->dev_private;
-
 	if (!attr->ingress || attr->egress || attr->transfer)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "DEFAULT MISS is only supported in ingress.");
-	if (!priv->hw_def_miss)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
-					  "DEFAULT MISS action does not exist.");
 	if (action_flags & MLX5_FLOW_FATE_ACTIONS)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -7493,8 +7504,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			actions_end = true;
 			break;
 		case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS:
-			ret = flow_hw_validate_action_default_miss(dev, attr,
-								   action_flags, error);
+			ret = flow_hw_validate_action_default_miss(attr, action_flags, error);
 			if (ret < 0)
 				return ret;
 			action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS;
@@ -11753,8 +11763,6 @@ __mlx5_flow_hw_resource_release(struct rte_eth_dev *dev, bool ctx_close)
 		claim_zero(flow_hw_actions_template_destroy(dev, at, NULL));
 		at = temp_at;
 	}
-	if (priv->hw_def_miss)
-		mlx5dr_action_destroy(priv->hw_def_miss);
 	flow_hw_free_vport_actions(priv);
 	if (priv->acts_ipool) {
 		mlx5_ipool_destroy(priv->acts_ipool);
@@ -11952,9 +11960,7 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 	struct rte_flow_queue_attr **_queue_attr = NULL;
 	struct rte_flow_queue_attr ctrl_queue_attr = {0};
 	bool is_proxy = !!(priv->sh->config.dv_esw_en && priv->master);
-	bool unified_fdb = is_unified_fdb(priv);
 	int ret = 0;
-	uint32_t action_flags;
 	bool strict_queue = false;
 
 	error->type = RTE_FLOW_ERROR_TYPE_NONE;
@@ -12129,30 +12135,6 @@ __flow_hw_configure(struct rte_eth_dev *dev,
 		if (ret)
 			goto err;
 	}
-	/*
-	 * DEFAULT_MISS action have different behaviors in different domains.
-	 * In FDB, it will steering the packets to the E-switch manager.
-	 * In NIC Rx root, it will steering the packet to the kernel driver stack.
-	 * An action with all bits set in the flag can be created and the HWS
-	 * layer will translate it properly when being used in different rules.
-	 */
-	action_flags = MLX5DR_ACTION_FLAG_ROOT_RX | MLX5DR_ACTION_FLAG_HWS_RX |
-		       MLX5DR_ACTION_FLAG_ROOT_TX | MLX5DR_ACTION_FLAG_HWS_TX;
-	if (is_proxy) {
-		if (unified_fdb)
-			action_flags |=
-				(MLX5DR_ACTION_FLAG_ROOT_FDB |
-				 MLX5DR_ACTION_FLAG_HWS_FDB_RX |
-				 MLX5DR_ACTION_FLAG_HWS_FDB_TX |
-				 MLX5DR_ACTION_FLAG_HWS_FDB_UNIFIED);
-		else
-			action_flags |=
-				(MLX5DR_ACTION_FLAG_ROOT_FDB |
-				 MLX5DR_ACTION_FLAG_HWS_FDB);
-	}
-	priv->hw_def_miss = mlx5dr_action_create_default_miss(priv->dr_ctx, action_flags);
-	if (!priv->hw_def_miss)
-		goto err;
 	if (is_proxy) {
 		ret = flow_hw_create_vport_actions(priv);
 		if (ret) {
@@ -14599,7 +14581,9 @@ hw_mirror_format_clone(struct rte_eth_dev *dev,
 			const struct mlx5_flow_template_table_cfg *table_cfg,
 			const struct rte_flow_action *actions,
 			struct mlx5dr_action_dest_attr *dest_attr,
-			uint8_t *reformat_buf, struct rte_flow_error *error)
+			uint8_t *reformat_buf,
+			enum mlx5dr_table_type table_type,
+			struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int ret;
@@ -14629,7 +14613,13 @@ hw_mirror_format_clone(struct rte_eth_dev *dev,
 				return ret;
 			break;
 		case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
-			dest_attr->dest = priv->hw_def_miss;
+			dest_attr->dest = mlx5_hws_global_action_def_miss_get(priv,
+									      table_type,
+									      false);
+			if (dest_attr->dest == NULL)
+				return rte_flow_error_set(error, ENOMEM,
+						RTE_FLOW_ERROR_TYPE_STATE, NULL,
+						"failed to allocate port representor action");
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
 			decap_seen = true;
@@ -14711,7 +14701,7 @@ mlx5_hw_create_mirror(struct rte_eth_dev *dev,
 		}
 		ret = hw_mirror_format_clone(dev, &mirror->clone[i], table_cfg,
 					     clone_actions, &mirror_attr[i],
-					     reformat_buf[i], error);
+					     reformat_buf[i], table_type, error);
 
 		if (ret)
 			goto error;
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.c b/drivers/net/mlx5/mlx5_hws_global_actions.c
index 6520879ae4..58b9c44752 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.c
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.c
@@ -49,6 +49,7 @@ mlx5_hws_global_actions_cleanup(struct mlx5_priv *priv)
 				     "send_to_kernel");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.nat64_6to4, "nat64_6to4");
 	global_actions_array_cleanup(priv, &priv->hw_global_actions.nat64_4to6, "nat64_4to6");
+	global_actions_array_cleanup(priv, &priv->hw_global_actions.def_miss, "def_miss");
 
 	rte_spinlock_unlock(&priv->hw_global_actions.lock);
 }
@@ -111,6 +112,14 @@ action_create_nat64_cb(struct mlx5dr_context *ctx,
 					  action_flags | MLX5DR_ACTION_FLAG_SHARED);
 }
 
+static struct mlx5dr_action *
+action_create_def_miss_cb(struct mlx5dr_context *ctx,
+			  uint32_t action_flags,
+			  void *user_data __rte_unused)
+{
+	return mlx5dr_action_create_default_miss(ctx, action_flags);
+}
+
 static struct mlx5dr_action *
 global_action_get(struct mlx5_priv *priv,
 		  struct mlx5_hws_global_actions_array *array,
@@ -248,3 +257,17 @@ mlx5_hws_global_action_nat64_get(struct mlx5_priv *priv,
 	return global_action_get(priv, array, name, table_type,
 				 false, action_create_nat64_cb, &attr);
 }
+
+struct mlx5dr_action *
+mlx5_hws_global_action_def_miss_get(struct mlx5_priv *priv,
+				    enum mlx5dr_table_type table_type,
+				    bool is_root)
+{
+	return global_action_get(priv,
+				 &priv->hw_global_actions.def_miss,
+				 "def_miss",
+				 table_type,
+				 is_root,
+				 action_create_def_miss_cb,
+				 NULL);
+}
diff --git a/drivers/net/mlx5/mlx5_hws_global_actions.h b/drivers/net/mlx5/mlx5_hws_global_actions.h
index 788b5c124a..16dc8d7002 100644
--- a/drivers/net/mlx5/mlx5_hws_global_actions.h
+++ b/drivers/net/mlx5/mlx5_hws_global_actions.h
@@ -32,6 +32,7 @@ struct mlx5_hws_global_actions {
 	struct mlx5_hws_global_actions_array send_to_kernel;
 	struct mlx5_hws_global_actions_array nat64_6to4;
 	struct mlx5_hws_global_actions_array nat64_4to6;
+	struct mlx5_hws_global_actions_array def_miss;
 	rte_spinlock_t lock;
 };
 
@@ -63,4 +64,8 @@ struct mlx5dr_action *mlx5_hws_global_action_nat64_get(struct mlx5_priv *priv,
 						       enum mlx5dr_table_type table_type,
 						       enum rte_flow_nat64_type nat64_type);
 
+struct mlx5dr_action *mlx5_hws_global_action_def_miss_get(struct mlx5_priv *priv,
+							   enum mlx5dr_table_type table_type,
+							   bool is_root);
+
 #endif /* !RTE_PMD_MLX5_HWS_GLOBAL_ACTIONS_H_ */
-- 
2.47.3



* Re: [PATCH 0/9] net/mlx5: lazily allocate HWS actions
  2026-02-25 11:59 [PATCH 0/9] net/mlx5: lazily allocate HWS actions Dariusz Sosnowski
                   ` (8 preceding siblings ...)
  2026-02-25 11:59 ` [PATCH 9/9] net/mlx5: lazily allocate HWS default miss action Dariusz Sosnowski
@ 2026-03-02 11:22 ` Raslan Darawsheh
  9 siblings, 0 replies; 11+ messages in thread
From: Raslan Darawsheh @ 2026-03-02 11:22 UTC (permalink / raw)
  To: Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao, Ori Kam,
	Suanming Mou, Matan Azrad
  Cc: dev

Hi,


On 25/02/2026 1:59 PM, Dariusz Sosnowski wrote:
> In mlx5 PMD, HWS flow engine is configured either:
> 
> - implicitly on port start or
> - explicitly on rte_flow_configure().
> 
> As part of flow engine configuration, PMD allocates a set of global
> HWS action objects. These are used to implement flow actions such as:
> 
> - DROP
> - MARK and FLAG (HWS tag action)
> - OF_PUSH_VLAN
> - OF_POP_VLAN
> - SEND_TO_KERNEL
> - NAT64
> - PORT_REPRESENTOR (HWS default miss action).
> 
> These actions can be allocated once, per flow domain,
> and parameterized at runtime.
> Allocating these actions requires allocating STCs,
> HW objects used to set up HW actions.
> These STCs are allocated in bulk to reduce the number of syscalls needed.
> 
> In the case of the global actions listed above, these STCs were always allocated,
> even if none of the related flow actions were used by the application.
> This caused unnecessary system memory usage, which could reach 4 MB per port.
> On systems with multiple VFs/SFs, memory usage could reach a couple of GBs.
> 
> This patchset addresses that by introducing lazy allocation of these actions:
> 
> - Patch 1 - Redefines mlx5dr_action_mh_pattern to use rte_be64_t instead of __be64.
>    This prevents compilation issues when mlx5dr.h is included in new files.
> - Patch 2 - Adds helpers for translating HWS flow table type to HWS action flags
>    used on allocation, to simplify the logic added in follow-up patches.
> - Patch 3 - Introduces a dedicated internal interface for lazily allocating
>    HWS actions. The drop action is handled first.
> - Patches 4-9 - Each patch adjusts one action type to use lazy allocation.
> 
> Dariusz Sosnowski (9):
>    net/mlx5: use DPDK be64 type in modify header pattern
>    net/mlx5/hws: add table type to action flags conversion
>    net/mlx5: lazily allocate drop HWS action
>    net/mlx5: lazily allocate tag HWS action
>    net/mlx5: lazily allocate HWS pop VLAN action
>    net/mlx5: lazily allocate HWS push VLAN action
>    net/mlx5: lazily allocate HWS send to kernel action
>    net/mlx5: lazily allocate HWS NAT64 action
>    net/mlx5: lazily allocate HWS default miss action
> 
>   drivers/net/mlx5/hws/mlx5dr.h              |  19 +-
>   drivers/net/mlx5/hws/mlx5dr_action.c       |  26 +-
>   drivers/net/mlx5/hws/mlx5dr_pat_arg.c      |  20 +-
>   drivers/net/mlx5/hws/mlx5dr_pat_arg.h      |   8 +-
>   drivers/net/mlx5/hws/mlx5dr_table.c        |  75 +++
>   drivers/net/mlx5/meson.build               |   1 +
>   drivers/net/mlx5/mlx5.h                    |  21 +-
>   drivers/net/mlx5/mlx5_flow.h               |   1 +
>   drivers/net/mlx5/mlx5_flow_dv.c            |   2 +-
>   drivers/net/mlx5/mlx5_flow_hw.c            | 557 ++++++---------------
>   drivers/net/mlx5/mlx5_hws_global_actions.c | 273 ++++++++++
>   drivers/net/mlx5/mlx5_hws_global_actions.h |  71 +++
>   12 files changed, 611 insertions(+), 463 deletions(-)
>   create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.c
>   create mode 100644 drivers/net/mlx5/mlx5_hws_global_actions.h
> 
> --
> 2.47.3
> 


Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh


