From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Jacob Keller <jacob.e.keller@intel.com>,
netdev <netdev@vger.kernel.org>,
David Miller <davem@davemloft.net>,
Sujai Buvaneswaran <sujai.buvaneswaran@intel.com>,
Jiri Pirko <jiri@resnulli.us>
Subject: Re: [PATCH v2 3/7] ice: move devlink locking outside the port creation
Date: Fri, 7 Jun 2024 07:10:13 +0200 [thread overview]
Message-ID: <ZmKWNbY1V+ZvP/qX@mev-dev> (raw)
In-Reply-To: <20240606175634.2e42fca8@kernel.org>
On Thu, Jun 06, 2024 at 05:56:34PM -0700, Jakub Kicinski wrote:
> On Wed, 05 Jun 2024 13:40:43 -0700 Jacob Keller wrote:
> > From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
> >
> > In the subfunction case, the lock is taken for the whole port
> > creation. Do the same in the VF case.
>
> No interactions with other locks worth mentioning?
>
You're right, I could also have mentioned the removal path. The patch is
only about the devlink lock during port representor creation and removal.
> > diff --git a/drivers/net/ethernet/intel/ice/devlink/devlink.c b/drivers/net/ethernet/intel/ice/devlink/devlink.c
> > index 704e9ad5144e..f774781ab514 100644
> > --- a/drivers/net/ethernet/intel/ice/devlink/devlink.c
> > +++ b/drivers/net/ethernet/intel/ice/devlink/devlink.c
> > @@ -794,10 +794,8 @@ int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *v
> >
> > tc_node = pi->root->children[0];
> > mutex_lock(&pi->sched_lock);
> > - devl_lock(devlink);
> > for (i = 0; i < tc_node->num_children; i++)
> > ice_traverse_tx_tree(devlink, tc_node->children[i], tc_node, pf);
> > - devl_unlock(devlink);
> > mutex_unlock(&pi->sched_lock);
>
> Like this didn't use to cause a deadlock?
>
> Seems ice_devlink_rate_node_del() takes this lock and it's already
> holding the devlink instance lock.
ice_devlink_rate_init_tx_topology() wasn't (until now) called with the
devlink lock held, because it is called from the port representor
creation flow, not from a devlink callback.
Thanks,
Michal
Thread overview: 14+ messages
2024-06-05 20:40 [PATCH v2 0/7] Intel Wired LAN Driver Updates 2024-06-03 Jacob Keller
2024-06-05 20:40 ` [PATCH v2 1/7] net: intel: Use *-y instead of *-objs in Makefile Jacob Keller
2024-06-05 20:40 ` [PATCH v2 2/7] ice: store representor ID in bridge port Jacob Keller
2024-06-07 0:50 ` Jakub Kicinski
2024-06-07 5:13 ` Michal Swiatkowski
2024-06-05 20:40 ` [PATCH v2 3/7] ice: move devlink locking outside the port creation Jacob Keller
2024-06-07 0:56 ` Jakub Kicinski
2024-06-07 5:10 ` Michal Swiatkowski [this message]
2024-06-07 21:20 ` Jacob Keller
2024-06-10 7:20 ` Michal Swiatkowski
2024-06-05 20:40 ` [PATCH v2 4/7] ice: move VSI configuration outside repr setup Jacob Keller
2024-06-05 20:40 ` [PATCH v2 5/7] ice: update representor when VSI is ready Jacob Keller
2024-06-05 20:40 ` [PATCH v2 6/7] ice: add and use roundup_u64 instead of open coding equivalent Jacob Keller
2024-06-05 20:40 ` [PATCH v2 7/7] ice: use irq_update_affinity_hint() Jacob Keller