From: Mark Bloch <mbloch@nvidia.com>
To: Tariq Toukan <tariqt@nvidia.com>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>
Cc: Leon Romanovsky <leon@kernel.org>, Jason Gunthorpe <jgg@ziepe.ca>,
Saeed Mahameed <saeedm@nvidia.com>, Shay Drory <shayd@nvidia.com>,
Or Har-Toov <ohartoov@nvidia.com>,
Edward Srouji <edwards@nvidia.com>,
Maher Sanalla <msanalla@nvidia.com>,
Simon Horman <horms@kernel.org>,
Gerd Bayer <gbayer@linux.ibm.com>,
Moshe Shemesh <moshe@nvidia.com>, Kees Cook <kees@kernel.org>,
Patrisious Haddad <phaddad@nvidia.com>,
Parav Pandit <parav@nvidia.com>,
Carolina Jubran <cjubran@nvidia.com>,
Cosmin Ratiu <cratiu@nvidia.com>,
linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
netdev@vger.kernel.org, Gal Pressman <gal@nvidia.com>,
Dragos Tatulea <dtatulea@nvidia.com>
Subject: Re: [PATCH net-next V2 5/7] net/mlx5: E-Switch, unwind only newly loaded representor types
Date: Sat, 2 May 2026 23:06:59 +0300 [thread overview]
Message-ID: <48dd34af-a9c4-435b-a22f-f1061a3616b2@nvidia.com> (raw)
In-Reply-To: <20260501041633.231662-6-tariqt@nvidia.com>
On 01/05/2026 7:16, Tariq Toukan wrote:
> From: Mark Bloch <mbloch@nvidia.com>
>
> __esw_offloads_load_rep() may return success without invoking the
> representor load callback when the representor type is already loaded.
>
> On a later load failure, mlx5_esw_offloads_rep_load() unconditionally
> unloaded all previously iterated representor types. This could unload
> representor types that were already loaded before this load attempt.
>
> Track which representor types were actually loaded by the current call and
> unwind only those on error. Also restore the representor state back to
> REP_REGISTERED when the load callback itself fails.
>
> Signed-off-by: Mark Bloch <mbloch@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
> ---
sashiko.dev says:

> In the modified __esw_offloads_load_rep(), the representor state is set
> to REP_LOADED via atomic_cmpxchg() before the actual load() callback is
> invoked. Does this ordering expose a race condition with lockless
> readers? If a lockless reader, such as mlx5_eswitch_get_proto_dev() or
> mlx5_esw_ipsec_restore_dest_uplink(), checks the state and observes
> REP_LOADED while the load callback is still executing, can it access
> uninitialized private data, resulting in a NULL pointer dereference?
> Additionally, if the load() callback subsequently fails and frees the
> allocated private data, could a lockless reader that already observed
> REP_LOADED end up accessing freed memory?
The REP_LOADED-before-load() ordering is pre-existing and orthogonal
to this unwind fix. I'll look into it and, if needed, address it in
a separate patch.
Mark
> .../mellanox/mlx5/core/eswitch_offloads.c | 38 ++++++++++++++-----
> 1 file changed, 29 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
> index d4ac07c995b9..8f656253981b 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
> @@ -2786,13 +2786,28 @@ void esw_offloads_cleanup(struct mlx5_eswitch *esw)
> }
>
> static int __esw_offloads_load_rep(struct mlx5_eswitch *esw,
> - struct mlx5_eswitch_rep *rep, u8 rep_type)
> + struct mlx5_eswitch_rep *rep,
> + u8 rep_type, bool *newly_loaded)
> {
> + int err;
> +
> mlx5_esw_assert_reps_locked(esw);
>
> + if (newly_loaded)
> + *newly_loaded = false;
> +
> if (atomic_cmpxchg(&rep->rep_data[rep_type].state,
> - REP_REGISTERED, REP_LOADED) == REP_REGISTERED)
> - return esw->offloads.rep_ops[rep_type]->load(esw->dev, rep);
> + REP_REGISTERED, REP_LOADED) != REP_REGISTERED)
> + return 0;
> +
> + err = esw->offloads.rep_ops[rep_type]->load(esw->dev, rep);
> + if (err) {
> + atomic_set(&rep->rep_data[rep_type].state, REP_REGISTERED);
> + return err;
> + }
> +
> + if (newly_loaded)
> + *newly_loaded = true;
>
> return 0;
> }
> @@ -2822,22 +2837,27 @@ static void __unload_reps_all_vport(struct mlx5_eswitch *esw, u8 rep_type)
> static int mlx5_esw_offloads_rep_load(struct mlx5_eswitch *esw, u16 vport_num)
> {
> struct mlx5_eswitch_rep *rep;
> + unsigned long loaded = 0;
> + bool newly_loaded;
> int rep_type;
> int err;
>
> rep = mlx5_eswitch_get_rep(esw, vport_num);
> for (rep_type = 0; rep_type < NUM_REP_TYPES; rep_type++) {
> - err = __esw_offloads_load_rep(esw, rep, rep_type);
> + err = __esw_offloads_load_rep(esw, rep, rep_type,
> + &newly_loaded);
> if (err)
> goto err_reps;
> + if (newly_loaded)
> + loaded |= BIT(rep_type);
> }
>
> return 0;
>
> err_reps:
> - atomic_set(&rep->rep_data[rep_type].state, REP_REGISTERED);
> - for (--rep_type; rep_type >= 0; rep_type--)
> - __esw_offloads_unload_rep(esw, rep, rep_type);
> + while (--rep_type >= 0)
> + if (test_bit(rep_type, &loaded))
> + __esw_offloads_unload_rep(esw, rep, rep_type);
> return err;
> }
>
> @@ -3591,13 +3611,13 @@ int mlx5_eswitch_reload_ib_reps(struct mlx5_eswitch *esw)
> if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
> return 0;
>
> - ret = __esw_offloads_load_rep(esw, rep, REP_IB);
> + ret = __esw_offloads_load_rep(esw, rep, REP_IB, NULL);
> if (ret)
> return ret;
>
> mlx5_esw_for_each_rep(esw, i, rep) {
> if (atomic_read(&rep->rep_data[REP_ETH].state) == REP_LOADED)
> - __esw_offloads_load_rep(esw, rep, REP_IB);
> + __esw_offloads_load_rep(esw, rep, REP_IB, NULL);
> }
>
> return 0;