From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Yevgeny Kliteynik <kliteyn@nvidia.com>,
Alex Vesker <valex@nvidia.com>,
Saeed Mahameed <saeedm@nvidia.com>
Subject: [net 03/19] net/mlx5: DR, Fix slab-out-of-bounds in mlx5_cmd_dr_create_fte
Date: Wed, 23 Feb 2022 09:04:14 -0800
Message-ID: <20220223170430.295595-4-saeed@kernel.org>
In-Reply-To: <20220223170430.295595-1-saeed@kernel.org>

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

When adding a rule with 32 destinations, we hit the following out-of-bounds
access issue:

  BUG: KASAN: slab-out-of-bounds in mlx5_cmd_dr_create_fte+0x18ee/0x1e70

This patch fixes the issue both by increasing the allocated buffers to
accommodate the needed actions and by checking the number of actions so
that a rule with too many actions is rejected.
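
The general pattern enforced is to bail out before appending to the
fixed-size actions array rather than writing past its end. A minimal
stand-alone sketch of that pattern (illustrative only, not the driver
code; the names below are made up for the example):

  /* Illustrative sketch only -- not the mlx5 driver code. */
  #include <errno.h>
  #include <stddef.h>

  #define ACTION_MAX 34          /* mirrors MLX5_FLOW_CONTEXT_ACTION_MAX */

  struct action;                 /* opaque placeholder for the real action type */

  /* Append 'a' only if there is still room; otherwise reject the rule. */
  static int push_action(struct action **actions, size_t *num_actions,
                         struct action *a)
  {
          if (*num_actions == ACTION_MAX)
                  return -EOPNOTSUPP;
          actions[(*num_actions)++] = a;
          return 0;
  }
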
Fixes: 1ffd498901c1 ("net/mlx5: DR, Increase supported num of actions to 32")
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../mellanox/mlx5/core/steering/fs_dr.c | 33 +++++++++++++++----
1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
index a476da2424f8..3f311462bedf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
@@ -233,7 +233,11 @@ static bool contain_vport_reformat_action(struct mlx5_flow_rule *dst)
dst->dest_attr.vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID;
}
-#define MLX5_FLOW_CONTEXT_ACTION_MAX 32
+/* We want to support a rule with 32 destinations, which means we need to
+ * account for 32 destinations plus usually a counter plus one more action
+ * for a multi-destination flow table.
+ */
+#define MLX5_FLOW_CONTEXT_ACTION_MAX 34
static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
struct mlx5_flow_table *ft,
struct mlx5_flow_group *group,
@@ -403,9 +407,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
enum mlx5_flow_destination_type type = dst->dest_attr.type;
u32 id;
- if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
- num_term_actions >= MLX5_FLOW_CONTEXT_ACTION_MAX) {
- err = -ENOSPC;
+ if (fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ num_term_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ err = -EOPNOTSUPP;
goto free_actions;
}
@@ -478,8 +482,9 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
MLX5_FLOW_DESTINATION_TYPE_COUNTER)
continue;
- if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
- err = -ENOSPC;
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ err = -EOPNOTSUPP;
goto free_actions;
}
@@ -499,14 +504,28 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
params.match_sz = match_sz;
params.match_buf = (u64 *)fte->val;
if (num_term_actions == 1) {
- if (term_actions->reformat)
+ if (term_actions->reformat) {
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ err = -EOPNOTSUPP;
+ goto free_actions;
+ }
actions[num_actions++] = term_actions->reformat;
+ }
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ err = -EOPNOTSUPP;
+ goto free_actions;
+ }
actions[num_actions++] = term_actions->dest;
} else if (num_term_actions > 1) {
bool ignore_flow_level =
!!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+ err = -EOPNOTSUPP;
+ goto free_actions;
+ }
tmp_action = mlx5dr_action_create_mult_dest_tbl(domain,
term_actions,
num_term_actions,
--
2.35.1