From: Mark Bloch <mbloch@nvidia.com>
To: Prathamesh Deshpande <prathameshdeshpande7@gmail.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>
Cc: shayd@nvidia.com, andrew+netdev@lunn.ch, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH net v1] net/mlx5: Fix flow steering alloc unwind
Date: Sun, 3 May 2026 20:38:37 +0300 [thread overview]
Message-ID: <a77f42e2-2565-4bae-8d24-e69d931b7949@nvidia.com> (raw)
In-Reply-To: <20260501232031.41688-1-prathameshdeshpande7@gmail.com>
On 02/05/2026 2:20, Prathamesh Deshpande wrote:
> mlx5_fs_core_alloc() uses mlx5_fs_core_free() for its common error path,
> but mlx5_fs_core_free() dereferences dev->priv.steering.
>
> If mlx5_ft_pool_init() fails, or if allocating the steering object fails,
> dev->priv.steering has not been assigned yet. The error path can then
> dereference NULL while unwinding the original failure.
>
> Split the unwind paths so only resources that were successfully
> initialized are released.
>
> Fixes: b33886971dbc ("net/mlx5: Initialize flow steering during driver probe")
> Signed-off-by: Prathamesh Deshpande <prathameshdeshpande7@gmail.com>
> ---
> .../net/ethernet/mellanox/mlx5/core/fs_core.c | 16 +++++++++++-----
> 1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
> index 61a6ba1e49dd..e1662dcedbf4 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
> @@ -3984,12 +3984,12 @@ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
>
> err = mlx5_ft_pool_init(dev);
> if (err)
> - goto err;
> + goto err_fc_stats;
>
> 	steering = kzalloc(sizeof(*steering), GFP_KERNEL);
> if (!steering) {
> err = -ENOMEM;
> - goto err;
> + goto err_ft_pool;
> }
>
> steering->dev = dev;
> @@ -4011,13 +4011,19 @@ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
> 0, NULL);
> if (!steering->ftes_cache || !steering->fgs_cache) {
> err = -ENOMEM;
> - goto err;
> + goto err_fs_core;
> }
>
> return 0;
>
> -err:
> - mlx5_fs_core_free(dev);
> +err_fs_core:
> + kmem_cache_destroy(steering->ftes_cache);
> + kmem_cache_destroy(steering->fgs_cache);
> + kfree(steering);
> +err_ft_pool:
> + mlx5_ft_pool_destroy(dev);
> +err_fc_stats:
> + mlx5_cleanup_fc_stats(dev);
> return err;
> }
>
Thanks for the fix.

Reviewed-by: Mark Bloch <mbloch@nvidia.com>
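For anyone following along, the split labels here are the usual kernel goto-unwind idiom: each label releases only what was successfully set up before the failing step, so nothing that was never initialized gets touched. A minimal userspace sketch of the same pattern (hypothetical names, plain malloc/free standing in for the mlx5 helpers, not the actual driver code):

```c
#include <stdlib.h>

/* Three init stages; each error label unwinds only the stages
 * that completed before the failure, in reverse order. */
struct ctx {
	char *stats;
	char *pool;
	char *steering;
};

/* fail_at simulates a failure at stage 1 or 2 (0 = no failure). */
static int ctx_alloc(struct ctx *c, int fail_at)
{
	int err;

	c->stats = malloc(16);
	if (!c->stats)
		return -1;

	c->pool = (fail_at == 1) ? NULL : malloc(16);
	if (!c->pool) {
		err = -1;
		goto err_stats;		/* only stats exists so far */
	}

	c->steering = (fail_at == 2) ? NULL : malloc(16);
	if (!c->steering) {
		err = -1;
		goto err_pool;		/* stats and pool both exist */
	}

	return 0;

err_pool:
	free(c->pool);
	c->pool = NULL;
err_stats:
	free(c->stats);
	c->stats = NULL;
	return err;
}
```

A failure at any stage leaves the struct fully unwound with no leak and, crucially, no dereference of a pointer that was never assigned, which is exactly what the single shared `err:` label in the old code got wrong.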