From: Jakub Kicinski <kuba@kernel.org>
To: tariqt@nvidia.com
Cc: Jakub Kicinski <kuba@kernel.org>,
edumazet@google.com, pabeni@redhat.com, andrew+netdev@lunn.ch,
davem@davemloft.net, donald.hunter@gmail.com, horms@kernel.org,
jiri@resnulli.us, corbet@lwn.net, skhan@linuxfoundation.org,
saeedm@nvidia.com, leon@kernel.org, mbloch@nvidia.com,
shuah@kernel.org, chuck.lever@oracle.com, matttbe@kernel.org,
cjubran@nvidia.com, cratiu@nvidia.com, dtatulea@nvidia.com,
jacob.e.keller@intel.com, shshitrit@nvidia.com,
daniel.zahka@gmail.com, parav@nvidia.com,
ajayachandra@nvidia.com, kees@kernel.org, shayd@nvidia.com,
danielj@nvidia.com, moshe@nvidia.com, willemb@google.com,
dw@davidwei.uk, petrm@nvidia.com, sdf@fomichev.me,
daniel@iogearbox.net, joe@dama.to, razor@blackwall.org,
vadim.fedorenko@linux.dev, mst@redhat.com, antonio@openvpn.net,
allison.henderson@oracle.com, minhquangbui99@gmail.com,
noren@nvidia.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-kselftest@vger.kernel.org,
gal@nvidia.com, jiri@nvidia.com
Subject: Re: [PATCH net-next V9 02/14] devlink: Add helpers to lock nested-in instances
Date: Mon, 30 Mar 2026 19:08:06 -0700 [thread overview]
Message-ID: <20260331020807.3524811-1-kuba@kernel.org> (raw)
In-Reply-To: <20260326065949.44058-3-tariqt@nvidia.com>
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
devlink: Add helpers to lock nested-in instances
Upcoming code will need to obtain a reference to locked nested-in
devlink instances. Add helpers to lock, obtain an already locked
reference and unlock/unref the nested-in instance.
> diff --git a/net/devlink/core.c b/net/devlink/core.c
> index eeb6a71f5f56e..db11248df7120 100644
> --- a/net/devlink/core.c
> +++ b/net/devlink/core.c
[ ... ]
> +/* Returns the nested in devlink object and validates its lock is held. */
> +struct devlink *devlink_nested_in_get_locked(struct devlink_rel *rel)
> +{
> + struct devlink *devlink;
> + unsigned long index;
> +
> + if (!rel)
> + return NULL;
> + index = rel->nested_in.devlink_index;
> + devlink = xa_find(&devlinks, &index, index, DEVLINK_REGISTERED);
> + if (devlink)
> + devl_assert_locked(devlink);
Can this dereference a freed pointer if the lock is not actually held?
xa_find() takes and drops the RCU read lock internally, so by the time it
returns, the devlink pointer is protected by neither RCU nor a reference
count. If a caller invokes this helper without holding the devlink lock
(the exact scenario the assertion is designed to detect), another thread
can concurrently unregister and free the devlink. If the RCU grace period
expires before the assertion executes, devl_assert_locked() dereferences
freed memory. Should the lookup and assertion be enclosed in an explicit
rcu_read_lock() section so the lock can be asserted safely?
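For illustration only, here is a sketch of what that could look like,
keeping the names from the patch and holding the RCU read lock across
both the lookup and the assertion (untested):

	/* Sketch: the RCU read-side critical section spans both the
	 * xa_find() lookup and the assertion, so the devlink object
	 * cannot be freed in between.
	 */
	struct devlink *devlink_nested_in_get_locked(struct devlink_rel *rel)
	{
		struct devlink *devlink;
		unsigned long index;

		if (!rel)
			return NULL;
		index = rel->nested_in.devlink_index;
		rcu_read_lock();
		devlink = xa_find(&devlinks, &index, index, DEVLINK_REGISTERED);
		if (devlink)
			devl_assert_locked(devlink);
		rcu_read_unlock();
		return devlink;
	}

This only makes the assertion itself safe to execute; the returned
pointer is still only valid because the caller is expected to hold the
instance lock.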
> + return devlink;
> +}
> +
> +void devlink_nested_in_put_unlock(struct devlink_rel *rel)
> +{
> + struct devlink *devlink = devlink_nested_in_get_locked(rel);
Could this corrupt the mutex state and cause a refcount underflow?
This helper takes a devlink_rel pointer rather than the devlink pointer
acquired by devlink_nested_in_get_lock(), and performs a second global
lookup to find the devlink.
If a caller mistakenly invokes this in an error cleanup path where it
never actually acquired the lock, the global xa_find() will still locate
the registered devlink, and devl_unlock() plus devlink_put() will then
run on a devlink the current thread does not own.
Would it be safer for the unlock/put helper to take the exact pointer
returned by the lock/get helper, so cleanup can only act on what the
caller actually holds?
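As a sketch of that alternative (hypothetical signature, for
illustration only):

	/* Hypothetical alternative: the caller passes back the exact
	 * pointer it got from the lock/get helper, so the unlock/put
	 * pair can never land on an instance the caller does not hold,
	 * and no second global lookup is needed.
	 */
	void devlink_nested_in_put_unlock(struct devlink *devlink)
	{
		if (devlink) {
			devl_unlock(devlink);
			devlink_put(devlink);
		}
	}

A NULL argument would then naturally correspond to the "lock helper
returned NULL" case, mirroring the existing NULL check.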
> +
> + if (devlink) {
> + devl_unlock(devlink);
> + devlink_put(devlink);
> + }
> +}
--
pw-bot: cr