From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
Yevgeny Kliteynik <kliteyn@nvidia.com>,
Paul Blakey <paulb@nvidia.com>
Subject: [net V2 7/9] net/mlx5e: TC, CT: Offload ct clear only once
Date: Wed, 5 Jul 2023 10:57:55 -0700
Message-ID: <20230705175757.284614-8-saeed@kernel.org>
In-Reply-To: <20230705175757.284614-1-saeed@kernel.org>

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

A non-clear CT action causes a flow rule split, while a CT clear action
doesn't and is just a header rewrite on the current flow rule. However,
ct offload is done in post_parse and is per ct action instance, so a ct
clear action is offloaded multiple times, while it is deleted only once.

Fix this by post-parsing the ct action only once per flow attribute
(which is per flow rule), using an 'offloaded' ct_attr flag.
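In short, the fix makes the ct offload idempotent per flow attribute:
the first offload sets the flag, later calls return early, and delete
tears down only if the flag is set. A simplified sketch of the pattern
(names here are illustrative stand-ins, not the actual driver code):

	/* Guard the offload with a per-attribute flag so repeated
	 * post_parse offload calls become no-ops, matching the
	 * single delete.
	 */
	struct ct_attr {
		bool offloaded;	/* set once the ct action is offloaded */
	};

	static int ct_flow_offload(struct ct_attr *attr)
	{
		if (attr->offloaded)	/* already done for this flow rule */
			return 0;

		/* ... perform the actual ct / ct_clear offload ... */

		attr->offloaded = true;
		return 0;
	}

	static void ct_delete_flow(struct ct_attr *attr)
	{
		if (!attr->offloaded)	/* nothing was offloaded */
			return;

		/* ... tear the offload down exactly once ... */
		attr->offloaded = false;
	}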
Fixes: 08fe94ec5f77 ("net/mlx5e: TC, Remove special handling of CT action")
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c | 14 +++++++++++---
 drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
index a254e728ac95..fadfa8b50beb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
@@ -1545,7 +1545,8 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
 
 	attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */
 	attr->ct_attr.zone = act->ct.zone;
-	attr->ct_attr.nf_ft = act->ct.flow_table;
+	if (!(act->ct.action & TCA_CT_ACT_CLEAR))
+		attr->ct_attr.nf_ft = act->ct.flow_table;
 	attr->ct_attr.act_miss_cookie = act->miss_cookie;
 
 	return 0;
@@ -1990,6 +1991,9 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
 	if (!priv)
 		return -EOPNOTSUPP;
 
+	if (attr->ct_attr.offloaded)
+		return 0;
+
 	if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) {
 		err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts,
 						     0, 0, 0, 0);
@@ -1999,11 +2003,15 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
 		attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
 	}
 
-	if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
+	if (!attr->ct_attr.nf_ft) { /* means only ct clear action, and not ct_clear,ct() */
+		attr->ct_attr.offloaded = true;
 		return 0;
+	}
 
 	mutex_lock(&priv->control_lock);
 	err = __mlx5_tc_ct_flow_offload(priv, attr);
+	if (!err)
+		attr->ct_attr.offloaded = true;
 	mutex_unlock(&priv->control_lock);
 
 	return err;
@@ -2021,7 +2029,7 @@ void
 mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv,
 		       struct mlx5_flow_attr *attr)
 {
-	if (!attr->ct_attr.ft) /* no ct action, return */
+	if (!attr->ct_attr.offloaded) /* no ct action, return */
 		return;
 	if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
 		return;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
index 8e9316fa46d4..b66c5f98067f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.h
@@ -29,6 +29,7 @@ struct mlx5_ct_attr {
 	u32 ct_labels_id;
 	u32 act_miss_mapping;
 	u64 act_miss_cookie;
+	bool offloaded;
 	struct mlx5_ct_ft *ft;
 };
 
--
2.41.0