From: Jiri Pirko <jiri@resnulli.us>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Ido Schimmel <idosch@nvidia.com>,
	netdev@vger.kernel.org, davem@davemloft.net, petrm@nvidia.com,
	pabeni@redhat.com, edumazet@google.com, mlxsw@nvidia.com,
	saeedm@nvidia.com
Subject: Re: [patch net-next RFC 0/2] net: devlink: remove devlink big lock
Date: Tue, 28 Jun 2022 09:04:11 +0200
Message-ID: <Yrqn6zM/kYVpc+Cg@nanopsycho>
In-Reply-To: <YrqgkKxHReC6evao@nanopsycho>

Tue, Jun 28, 2022 at 08:32:49AM CEST, jiri@resnulli.us wrote:
>Mon, Jun 27, 2022 at 07:49:45PM CEST, kuba@kernel.org wrote:
>>On Mon, 27 Jun 2022 18:41:31 +0300 Ido Schimmel wrote:
>>> On Mon, Jun 27, 2022 at 03:54:59PM +0200, Jiri Pirko wrote:
>>> > This is an attempt to remove the use of devlink_mutex. This is a
>>> > global lock taken for every user command, which causes long
>>> > operations performed on one devlink instance (like flash update)
>>> > to block other operations on different instances.
>>> 
>>> This patchset is supposed to prevent one devlink instance from blocking
>>> another? Devlink does not enable "parallel_ops", which means that the
>>> generic netlink mutex is serializing all user space operations. AFAICT,
>>> this series does not enable "parallel_ops", so I'm not sure what
>>> difference the removal of the devlink mutex makes.
>>> 
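For context, "parallel_ops" is a field of struct genl_family: when it
is false, the genetlink core runs every doit handler under the global
genl_mutex. A minimal sketch of a family definition (the family name,
command and handler are made up for illustration; this is not
devlink's actual registration):

#include <net/genetlink.h>

static int example_doit(struct sk_buff *skb, struct genl_info *info)
{
	return 0;	/* hypothetical handler body */
}

static const struct genl_ops example_ops[] = {
	{
		.cmd	= 1,		/* hypothetical command */
		.doit	= example_doit,
	},
};

static struct genl_family example_family = {
	.name		= "example",
	.version	= 1,
	.ops		= example_ops,
	.n_ops		= ARRAY_SIZE(example_ops),
	/* false (devlink's setting at the time of this thread): all
	 * doit handlers are serialized by the global genl_mutex.
	 * true: handlers may run concurrently and must do their own
	 * locking. */
	.parallel_ops	= false,
};
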
>>> The devlink mutex (in accordance with the comment above it) serializes
>>> all user space operations and accesses to the devlink devices list. This
>>> resulted in an AA deadlock in the previous submission because we had a
>>> flow where a user space operation (which acquires this mutex) also tries
>>> to register / unregister a nested devlink instance which also tries to
>>> acquire the mutex.
>>> 
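To make the AA pattern concrete, here is a self-contained userspace
illustration, with a pthread mutex standing in for devlink_mutex (the
function names are invented; with a default, non-recursive mutex this
program hangs):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for registering a nested devlink instance. */
static void register_nested_instance(void)
{
	/* Second acquisition of the same mutex: the AA deadlock. */
	pthread_mutex_lock(&big_lock);
	/* ... would add the nested instance to the devices list ... */
	pthread_mutex_unlock(&big_lock);
}

/* Stand-in for a user space operation handler. */
static void handle_user_command(void)
{
	pthread_mutex_lock(&big_lock);	/* first acquisition */
	register_nested_instance();	/* blocks forever on big_lock */
	pthread_mutex_unlock(&big_lock);
}

int main(void)
{
	handle_user_command();	/* never returns */
	puts("never reached");
	return 0;
}
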
>>> As long as devlink does not implement "parallel_ops", it seems that the
>>> devlink mutex can be reduced to only serializing accesses to the devlink
>>> devices list, thereby eliminating the deadlock.
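
Concretely, that reduced role could look like the sketch below: hold
the mutex only across list walking and reference taking, and run the
actual operation with just a reference held. The devlinks xarray, the
DEVLINK_REGISTERED mark and devlink_try_get()/devlink_put() exist in
the devlink core; the wrapper name and callback here are made up:

static void for_each_registered_devlink(void (*fn)(struct devlink *devlink))
{
	struct devlink *devlink;
	unsigned long index;

	mutex_lock(&devlink_mutex);	/* now guards only the list */
	xa_for_each_marked(&devlinks, index, devlink, DEVLINK_REGISTERED) {
		if (!devlink_try_get(devlink))
			continue;	/* instance is going away, skip */
		mutex_unlock(&devlink_mutex);
		fn(devlink);	/* long work runs without the list lock */
		devlink_put(devlink);
		mutex_lock(&devlink_mutex);
	}
	mutex_unlock(&devlink_mutex);
}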
>>
>>I'm unclear on why we can't wait for the mlx5 locking rework, which will
>
>Sure we can, no rush.
>
>>allow us to move completely to per-instance locks. Do you have extra
>>insights into how that work is progressing? I was hoping that it would
>
>It's under internal review afaik.
>
>>be complete in the next two months. 
>
>What do you mean exactly? Do you mean that we would be okay just with
>devlink->lock? I don't think so. We need a user lock because we can't take
>devlink->lock for port split and reload. devlink_mutex protects that now,

Okay, I take back port split; that is already fixed.
Moshe is taking care of the rest (port_new/del, reporter_*). I will
check out the reload. Once we have that, you are correct, we are fine
with the devlink->lock instance lock.
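
Once that is done, a command handler would only need something like
the sketch below, with devl_lock()/devl_unlock() naming the
per-instance lock helpers (the command and work helper are
hypothetical):

static int example_cmd_doit(struct sk_buff *skb, struct genl_info *info)
{
	struct devlink *devlink = info->user_ptr[0];	/* set in pre_doit */
	int err;

	devl_lock(devlink);	/* per-instance lock, nothing global */
	err = example_do_work(devlink);	/* hypothetical work */
	devl_unlock(devlink);
	return err;
}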

Thanks!


>the devlink->cmd_lock I'm introducing here just replaces devlink_mutex.
>If we can do without, that is fine. I just can't see how.
>Also, I don't see the relation to mlx5 work. What is that?
