* [PATCH net-next v1 1/4] net: create netdev_nl_sock to wrap bindings list
2025-03-07 15:57 [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Stanislav Fomichev
@ 2025-03-07 15:57 ` Stanislav Fomichev
2025-03-07 15:57 ` [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex Stanislav Fomichev
` (3 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 15:57 UTC (permalink / raw)
To: netdev
Cc: davem, edumazet, kuba, pabeni, linux-kernel, horms, donald.hunter,
michael.chan, pavan.chebbi, andrew+netdev, jdamato, sdf, xuanzhuo,
almasrymina, asml.silence, dw
No functional changes. Subsequent patches will add more granular locking
to netdev_nl_sock.
Cc: Mina Almasry <almasrymina@google.com>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
Documentation/netlink/specs/netdev.yaml | 4 ++--
include/net/netdev_netlink.h | 11 +++++++++++
net/core/netdev-genl-gen.c | 4 ++--
net/core/netdev-genl-gen.h | 6 +++---
net/core/netdev-genl.c | 19 +++++++++----------
5 files changed, 27 insertions(+), 17 deletions(-)
create mode 100644 include/net/netdev_netlink.h
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 36f1152bfac3..f5e0750ab71d 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -745,8 +745,8 @@ name: netdev
- irq-suspend-timeout
kernel-family:
- headers: [ "linux/list.h"]
- sock-priv: struct list_head
+ headers: [ "net/netdev_netlink.h"]
+ sock-priv: struct netdev_nl_sock
mcast-groups:
list:
diff --git a/include/net/netdev_netlink.h b/include/net/netdev_netlink.h
new file mode 100644
index 000000000000..1599573d35c9
--- /dev/null
+++ b/include/net/netdev_netlink.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NET_NETDEV_NETLINK_H
+#define __NET_NETDEV_NETLINK_H
+
+#include <linux/list.h>
+
+struct netdev_nl_sock {
+ struct list_head bindings;
+};
+
+#endif /* __NET_NETDEV_NETLINK_H */
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 996ac6a449eb..739f7b6506a6 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -9,7 +9,7 @@
#include "netdev-genl-gen.h"
#include <uapi/linux/netdev.h>
-#include <linux/list.h>
+#include <net/netdev_netlink.h>
/* Integer value ranges */
static const struct netlink_range_validation netdev_a_page_pool_id_range = {
@@ -217,7 +217,7 @@ struct genl_family netdev_nl_family __ro_after_init = {
.n_split_ops = ARRAY_SIZE(netdev_nl_ops),
.mcgrps = netdev_nl_mcgrps,
.n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps),
- .sock_priv_size = sizeof(struct list_head),
+ .sock_priv_size = sizeof(struct netdev_nl_sock),
.sock_priv_init = __netdev_nl_sock_priv_init,
.sock_priv_destroy = __netdev_nl_sock_priv_destroy,
};
diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h
index e09dd7539ff2..17d39fd64c94 100644
--- a/net/core/netdev-genl-gen.h
+++ b/net/core/netdev-genl-gen.h
@@ -10,7 +10,7 @@
#include <net/genetlink.h>
#include <uapi/linux/netdev.h>
-#include <linux/list.h>
+#include <net/netdev_netlink.h>
/* Common nested types */
extern const struct nla_policy netdev_page_pool_info_nl_policy[NETDEV_A_PAGE_POOL_IFINDEX + 1];
@@ -42,7 +42,7 @@ enum {
extern struct genl_family netdev_nl_family;
-void netdev_nl_sock_priv_init(struct list_head *priv);
-void netdev_nl_sock_priv_destroy(struct list_head *priv);
+void netdev_nl_sock_priv_init(struct netdev_nl_sock *priv);
+void netdev_nl_sock_priv_destroy(struct netdev_nl_sock *priv);
#endif /* _LINUX_NETDEV_GEN_H */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 2b774183d31c..a219be90c739 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -829,8 +829,8 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *tb[ARRAY_SIZE(netdev_queue_id_nl_policy)];
struct net_devmem_dmabuf_binding *binding;
- struct list_head *sock_binding_list;
u32 ifindex, dmabuf_fd, rxq_idx;
+ struct netdev_nl_sock *priv;
struct net_device *netdev;
struct sk_buff *rsp;
struct nlattr *attr;
@@ -845,10 +845,9 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_DMABUF_FD]);
- sock_binding_list = genl_sk_priv_get(&netdev_nl_family,
- NETLINK_CB(skb).sk);
- if (IS_ERR(sock_binding_list))
- return PTR_ERR(sock_binding_list);
+ priv = genl_sk_priv_get(&netdev_nl_family, NETLINK_CB(skb).sk);
+ if (IS_ERR(priv))
+ return PTR_ERR(priv);
rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!rsp)
@@ -909,7 +908,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
goto err_unbind;
}
- list_add(&binding->list, sock_binding_list);
+ list_add(&binding->list, &priv->bindings);
nla_put_u32(rsp, NETDEV_A_DMABUF_ID, binding->id);
genlmsg_end(rsp, hdr);
@@ -931,17 +930,17 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
return err;
}
-void netdev_nl_sock_priv_init(struct list_head *priv)
+void netdev_nl_sock_priv_init(struct netdev_nl_sock *priv)
{
- INIT_LIST_HEAD(priv);
+ INIT_LIST_HEAD(&priv->bindings);
}
-void netdev_nl_sock_priv_destroy(struct list_head *priv)
+void netdev_nl_sock_priv_destroy(struct netdev_nl_sock *priv)
{
struct net_devmem_dmabuf_binding *binding;
struct net_devmem_dmabuf_binding *temp;
- list_for_each_entry_safe(binding, temp, priv, list) {
+ list_for_each_entry_safe(binding, temp, &priv->bindings, list) {
rtnl_lock();
net_devmem_unbind_dmabuf(binding);
rtnl_unlock();
--
2.48.1
* [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex
2025-03-07 15:57 [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Stanislav Fomichev
2025-03-07 15:57 ` [PATCH net-next v1 1/4] net: create netdev_nl_sock to wrap bindings list Stanislav Fomichev
@ 2025-03-07 15:57 ` Stanislav Fomichev
2025-03-07 17:49 ` Jakub Kicinski
2025-03-07 15:57 ` [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket Stanislav Fomichev
` (2 subsequent siblings)
4 siblings, 1 reply; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 15:57 UTC (permalink / raw)
To: netdev
Cc: davem, edumazet, kuba, pabeni, linux-kernel, horms, donald.hunter,
michael.chan, pavan.chebbi, andrew+netdev, jdamato, sdf, xuanzhuo,
almasrymina, asml.silence, dw
In the process of making queue management API rtnl_lock-less, we
need a separate lock to protect xa that keeps a global list of bindings.
Also change the ordering of 'posting' binding to
net_devmem_dmabuf_bindings: xa_alloc is done after binding is fully
initialized (so xa_load lookups fully instantiated bindings) and
xa_erase is done as a first step during unbind.
Cc: Mina Almasry <almasrymina@google.com>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
net/core/devmem.c | 29 ++++++++++++++---------------
1 file changed, 14 insertions(+), 15 deletions(-)
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 7c6e0b5b6acb..c16cdac46bed 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -25,7 +25,7 @@
/* Device memory support */
-/* Protected by rtnl_lock() */
+static DEFINE_MUTEX(net_devmem_bindings_mutex);
static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
static const struct memory_provider_ops dmabuf_devmem_ops;
@@ -119,6 +119,10 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
unsigned long xa_idx;
unsigned int rxq_idx;
+ mutex_lock(&net_devmem_bindings_mutex);
+ xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+ mutex_unlock(&net_devmem_bindings_mutex);
+
if (binding->list.next)
list_del(&binding->list);
@@ -133,8 +137,6 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
}
- xa_erase(&net_devmem_dmabuf_bindings, binding->id);
-
net_devmem_dmabuf_binding_put(binding);
}
@@ -220,24 +222,15 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
}
binding->dev = dev;
-
- err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
- binding, xa_limit_32b, &id_alloc_next,
- GFP_KERNEL);
- if (err < 0)
- goto err_free_binding;
-
xa_init_flags(&binding->bound_rxqs, XA_FLAGS_ALLOC);
-
refcount_set(&binding->ref, 1);
-
binding->dmabuf = dmabuf;
binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
if (IS_ERR(binding->attachment)) {
err = PTR_ERR(binding->attachment);
NL_SET_ERR_MSG(extack, "Failed to bind dmabuf to device");
- goto err_free_id;
+ goto err_free_binding;
}
binding->sgt = dma_buf_map_attachment_unlocked(binding->attachment,
@@ -305,6 +298,14 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
virtual += len;
}
+ mutex_lock(&net_devmem_bindings_mutex);
+ err = xa_alloc_cyclic(&net_devmem_dmabuf_bindings, &binding->id,
+ binding, xa_limit_32b, &id_alloc_next,
+ GFP_KERNEL);
+ mutex_unlock(&net_devmem_bindings_mutex);
+ if (err < 0)
+ goto err_free_chunks;
+
return binding;
err_free_chunks:
@@ -316,8 +317,6 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
DMA_FROM_DEVICE);
err_detach:
dma_buf_detach(dmabuf, binding->attachment);
-err_free_id:
- xa_erase(&net_devmem_dmabuf_bindings, binding->id);
err_free_binding:
kfree(binding);
err_put_dmabuf:
--
2.48.1
* Re: [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex
2025-03-07 15:57 ` [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex Stanislav Fomichev
@ 2025-03-07 17:49 ` Jakub Kicinski
2025-03-07 19:36 ` Stanislav Fomichev
0 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-07 17:49 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, pabeni, linux-kernel, horms,
donald.hunter, michael.chan, pavan.chebbi, andrew+netdev, jdamato,
xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 07:57:23 -0800 Stanislav Fomichev wrote:
> In the process of making queue management API rtnl_lock-less, we
> need a separate lock to protect xa that keeps a global list of bindings.
>
> Also change the ordering of 'posting' binding to
> net_devmem_dmabuf_bindings: xa_alloc is done after binding is fully
> initialized (so xa_load lookups fully instantiated bindings) and
> xa_erase is done as a first step during unbind.
You're just wrapping the calls to xarray here, is there a plan to use
this new lock for other things? xarray has a built in spin lock, we
don't have to protect it.
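Jakub's point about xarray's built-in locking can be illustrated with a short kernel-style sketch (illustrative only, not from the posted patch; `bindings_xa` and the helper names are made up): `xa_alloc()` and `xa_erase()` take the xarray's internal spinlock themselves, so a plain insert/erase pair needs no external mutex just to keep the structure consistent.

```c
/* Illustrative kernel-style sketch: the xarray API is internally locked.
 * xa_alloc()/xa_erase() take xa->xa_lock themselves, so no external
 * mutex is required for structural consistency. */
static DEFINE_XARRAY_FLAGS(bindings_xa, XA_FLAGS_ALLOC1);

static int binding_publish(struct net_devmem_dmabuf_binding *binding)
{
	/* Atomic with respect to concurrent xa_load()/xa_erase();
	 * no rtnl_lock or dedicated mutex needed for this call alone. */
	return xa_alloc(&bindings_xa, &binding->id, binding,
			xa_limit_32b, GFP_KERNEL);
}

static void binding_unpublish(struct net_devmem_dmabuf_binding *binding)
{
	xa_erase(&bindings_xa, binding->id);
}
```

An external lock would only be warranted if the xarray update had to stay atomic with other binding state, which is not the case here.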
* Re: [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex
2025-03-07 17:49 ` Jakub Kicinski
@ 2025-03-07 19:36 ` Stanislav Fomichev
0 siblings, 0 replies; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 19:36 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On 03/07, Jakub Kicinski wrote:
> On Fri, 7 Mar 2025 07:57:23 -0800 Stanislav Fomichev wrote:
> > In the process of making queue management API rtnl_lock-less, we
> > need a separate lock to protect xa that keeps a global list of bindings.
> >
> > Also change the ordering of 'posting' binding to
> > net_devmem_dmabuf_bindings: xa_alloc is done after binding is fully
> > initialized (so xa_load lookups fully instantiated bindings) and
> > xa_erase is done as a first step during unbind.
>
> You're just wrapping the calls to xarray here, is there a plan to use
> this new lock for other things? xarray has a built in spin lock, we
> don't have to protect it.
Oh, that is true, I completely missed that. In this case I can drop this
patch, thanks!
* [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 15:57 [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Stanislav Fomichev
2025-03-07 15:57 ` [PATCH net-next v1 1/4] net: create netdev_nl_sock to wrap bindings list Stanislav Fomichev
2025-03-07 15:57 ` [PATCH net-next v1 2/4] net: protect net_devmem_dmabuf_bindings by new net_devmem_bindings_mutex Stanislav Fomichev
@ 2025-03-07 15:57 ` Stanislav Fomichev
2025-03-07 17:50 ` Jakub Kicinski
2025-03-07 23:34 ` Jakub Kicinski
2025-03-07 15:57 ` [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations Stanislav Fomichev
2025-03-08 3:22 ` [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Jakub Kicinski
4 siblings, 2 replies; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 15:57 UTC (permalink / raw)
To: netdev
Cc: davem, edumazet, kuba, pabeni, linux-kernel, horms, donald.hunter,
michael.chan, pavan.chebbi, andrew+netdev, jdamato, sdf, xuanzhuo,
almasrymina, asml.silence, dw
As we move away from rtnl_lock for queue ops, introduce
per-netdev_nl_sock lock.
Cc: Mina Almasry <almasrymina@google.com>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
include/net/netdev_netlink.h | 1 +
net/core/netdev-genl.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/include/net/netdev_netlink.h b/include/net/netdev_netlink.h
index 1599573d35c9..075962dbe743 100644
--- a/include/net/netdev_netlink.h
+++ b/include/net/netdev_netlink.h
@@ -5,6 +5,7 @@
#include <linux/list.h>
struct netdev_nl_sock {
+ struct mutex lock;
struct list_head bindings;
};
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index a219be90c739..8acdeeae24e7 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
goto err_genlmsg_free;
}
+ mutex_lock(&priv->lock);
rtnl_lock();
netdev = __dev_get_by_index(genl_info_net(info), ifindex);
@@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
net_devmem_unbind_dmabuf(binding);
err_unlock:
rtnl_unlock();
+ mutex_unlock(&priv->lock);
err_genlmsg_free:
nlmsg_free(rsp);
return err;
@@ -933,6 +935,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
void netdev_nl_sock_priv_init(struct netdev_nl_sock *priv)
{
INIT_LIST_HEAD(&priv->bindings);
+ mutex_init(&priv->lock);
}
void netdev_nl_sock_priv_destroy(struct netdev_nl_sock *priv)
@@ -940,11 +943,13 @@ void netdev_nl_sock_priv_destroy(struct netdev_nl_sock *priv)
struct net_devmem_dmabuf_binding *binding;
struct net_devmem_dmabuf_binding *temp;
+ mutex_lock(&priv->lock);
list_for_each_entry_safe(binding, temp, &priv->bindings, list) {
rtnl_lock();
net_devmem_unbind_dmabuf(binding);
rtnl_unlock();
}
+ mutex_unlock(&priv->lock);
}
static int netdev_genl_netdevice_event(struct notifier_block *nb,
--
2.48.1
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 15:57 ` [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket Stanislav Fomichev
@ 2025-03-07 17:50 ` Jakub Kicinski
2025-03-07 19:35 ` Stanislav Fomichev
2025-03-07 23:34 ` Jakub Kicinski
1 sibling, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-07 17:50 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, pabeni, linux-kernel, horms,
donald.hunter, michael.chan, pavan.chebbi, andrew+netdev, jdamato,
xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> As we move away from rtnl_lock for queue ops, introduce
> per-netdev_nl_sock lock.
What is it protecting?
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 17:50 ` Jakub Kicinski
@ 2025-03-07 19:35 ` Stanislav Fomichev
2025-03-07 23:33 ` Jakub Kicinski
0 siblings, 1 reply; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 19:35 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On 03/07, Jakub Kicinski wrote:
> On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > As we move away from rtnl_lock for queue ops, introduce
> > per-netdev_nl_sock lock.
>
> What is it protecting?
The 'bindings' field of the netlink socket:
struct netdev_nl_sock {
struct mutex lock;
struct list_head bindings; <<<
};
I'm assuming it's totally valid to have several bindings per socket?
(attached to different rx queues)
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 19:35 ` Stanislav Fomichev
@ 2025-03-07 23:33 ` Jakub Kicinski
0 siblings, 0 replies; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-07 23:33 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 11:35:23 -0800 Stanislav Fomichev wrote:
> On 03/07, Jakub Kicinski wrote:
> > On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > > As we move away from rtnl_lock for queue ops, introduce
> > > per-netdev_nl_sock lock.
> >
> > What is it protecting?
>
> The 'bindings' field of the netlink socket:
>
> struct netdev_nl_sock {
> struct mutex lock;
> struct list_head bindings; <<<
> };
>
> I'm assuming it's totally valid to have several bindings per socket?
Totally, sorry, I got confused by there being two xarrays.
Lock on the socket state makes sense.
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 15:57 ` [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket Stanislav Fomichev
2025-03-07 17:50 ` Jakub Kicinski
@ 2025-03-07 23:34 ` Jakub Kicinski
2025-03-07 23:43 ` Stanislav Fomichev
1 sibling, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-07 23:34 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, pabeni, linux-kernel, horms,
donald.hunter, michael.chan, pavan.chebbi, andrew+netdev, jdamato,
xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> index a219be90c739..8acdeeae24e7 100644
> --- a/net/core/netdev-genl.c
> +++ b/net/core/netdev-genl.c
> @@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> goto err_genlmsg_free;
> }
>
> + mutex_lock(&priv->lock);
> rtnl_lock();
>
> netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> @@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> net_devmem_unbind_dmabuf(binding);
> err_unlock:
> rtnl_unlock();
> + mutex_unlock(&priv->lock);
> err_genlmsg_free:
> nlmsg_free(rsp);
> return err;
I think you're missing an unlock before successful return here no?
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 23:34 ` Jakub Kicinski
@ 2025-03-07 23:43 ` Stanislav Fomichev
2025-03-09 21:57 ` Mina Almasry
0 siblings, 1 reply; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 23:43 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On 03/07, Jakub Kicinski wrote:
> On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> > index a219be90c739..8acdeeae24e7 100644
> > --- a/net/core/netdev-genl.c
> > +++ b/net/core/netdev-genl.c
> > @@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > goto err_genlmsg_free;
> > }
> >
> > + mutex_lock(&priv->lock);
> > rtnl_lock();
> >
> > netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> > @@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > net_devmem_unbind_dmabuf(binding);
> > err_unlock:
> > rtnl_unlock();
> > + mutex_unlock(&priv->lock);
> > err_genlmsg_free:
> > nlmsg_free(rsp);
> > return err;
>
> I think you're missing an unlock before successful return here no?
Yes, thanks! :-( I have tested some of this code with Mina's latest TX + my
loopback mode, but it doesn't have any RX tests.. Will try to hack
something together to run RX bind before I repost.
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-07 23:43 ` Stanislav Fomichev
@ 2025-03-09 21:57 ` Mina Almasry
2025-03-10 5:02 ` Stanislav Fomichev
0 siblings, 1 reply; 19+ messages in thread
From: Mina Almasry @ 2025-03-09 21:57 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Jakub Kicinski, Stanislav Fomichev, netdev, davem, edumazet,
pabeni, linux-kernel, horms, donald.hunter, michael.chan,
pavan.chebbi, andrew+netdev, jdamato, xuanzhuo, asml.silence, dw
On Fri, Mar 7, 2025 at 3:43 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
>
> On 03/07, Jakub Kicinski wrote:
> > On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > > diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> > > index a219be90c739..8acdeeae24e7 100644
> > > --- a/net/core/netdev-genl.c
> > > +++ b/net/core/netdev-genl.c
> > > @@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > goto err_genlmsg_free;
> > > }
> > >
> > > + mutex_lock(&priv->lock);
> > > rtnl_lock();
> > >
> > > netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> > > @@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > net_devmem_unbind_dmabuf(binding);
> > > err_unlock:
> > > rtnl_unlock();
> > > + mutex_unlock(&priv->lock);
> > > err_genlmsg_free:
> > > nlmsg_free(rsp);
> > > return err;
> >
> > I think you're missing an unlock before successful return here no?
>
> Yes, thanks! :-( I have tested some of this code with Mina's latest TX + my
> loopback mode, but it doesn't have any RX tests.. Will try to hack
> something together to run RX bind before I repost.
Is the existing RX test not working for you?
Also running `./ncdevmem` manually on a driver you have that supports
devmem will test the binding patch.
I wonder if we can change list_head to xarray, which manages its own
locking, instead of list_head plus manual locking. Just an idea, I
don't have a strong preference here. It may be annoying that xarray does
lookups by an index, so we have to store the index somewhere. But if
all we do here is add to the xarray and later loop over it to unbind
elements, we don't need to store the indexes anywhere.
--
Thanks,
Mina
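Mina's suggestion can be sketched as follows (illustrative kernel-style code, not part of any posted patch; the helper names are hypothetical): an xarray embedded in netdev_nl_sock takes its own spinlock on each modification, and teardown can walk it with `xa_for_each()` without ever consulting the allocated index.

```c
/* Hypothetical variant of netdev_nl_sock using an xarray instead of
 * list_head plus a mutex; the xarray locks itself on modification. */
struct netdev_nl_sock {
	struct xarray bindings;
};

static void sock_priv_init(struct netdev_nl_sock *priv)
{
	xa_init_flags(&priv->bindings, XA_FLAGS_ALLOC);
}

static int sock_add_binding(struct netdev_nl_sock *priv,
			    struct net_devmem_dmabuf_binding *binding)
{
	u32 id;	/* allocated index, never used again */

	return xa_alloc(&priv->bindings, &id, binding,
			xa_limit_32b, GFP_KERNEL);
}

static void sock_priv_destroy(struct netdev_nl_sock *priv)
{
	struct net_devmem_dmabuf_binding *binding;
	unsigned long idx;

	/* No stored per-binding index needed: xa_for_each() visits
	 * every present entry. */
	xa_for_each(&priv->bindings, idx, binding)
		net_devmem_unbind_dmabuf(binding);
	xa_destroy(&priv->bindings);
}
```

As the thread concludes, this is a wash for a path that is not performance sensitive, so the list_head plus mutex version was kept.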
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-09 21:57 ` Mina Almasry
@ 2025-03-10 5:02 ` Stanislav Fomichev
2025-03-10 5:45 ` Mina Almasry
0 siblings, 1 reply; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-10 5:02 UTC (permalink / raw)
To: Mina Almasry
Cc: Jakub Kicinski, Stanislav Fomichev, netdev, davem, edumazet,
pabeni, linux-kernel, horms, donald.hunter, michael.chan,
pavan.chebbi, andrew+netdev, jdamato, xuanzhuo, asml.silence, dw
On 03/09, Mina Almasry wrote:
> On Fri, Mar 7, 2025 at 3:43 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
> >
> > On 03/07, Jakub Kicinski wrote:
> > > On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > > > diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> > > > index a219be90c739..8acdeeae24e7 100644
> > > > --- a/net/core/netdev-genl.c
> > > > +++ b/net/core/netdev-genl.c
> > > > @@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > > goto err_genlmsg_free;
> > > > }
> > > >
> > > > + mutex_lock(&priv->lock);
> > > > rtnl_lock();
> > > >
> > > > netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> > > > @@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > > net_devmem_unbind_dmabuf(binding);
> > > > err_unlock:
> > > > rtnl_unlock();
> > > > + mutex_unlock(&priv->lock);
> > > > err_genlmsg_free:
> > > > nlmsg_free(rsp);
> > > > return err;
> > >
> > > I think you're missing an unlock before successful return here no?
> >
> > Yes, thanks! :-( I have tested some of this code with Mina's latest TX + my
> > loopback mode, but it doesn't have any RX tests.. Will try to hack
> > something together to run RX bind before I repost.
>
> Is the existing RX test not working for you?
>
> Also running `./ncdevmem` manually on a driver you have that supports
> devmem will test the binding patch.
It's a bit of a pita to run everything right now since drivers are
not in the tree :-(
> I wonder if we can change list_head to xarray, which manages its own
> locking, instead of list_head plus manual locking. Just an idea, I
> don't have a strong preference here. It may be annoying that xarray do
> lookups by an index, so we have to store the index somewhere. But if
> all we do here is add to the xarray and later loop over it to unbind
> elements, we don't need to store the indexes anywhere.
Yeah, having to keep the index around might be a bit awkward. And
since this is not a particularly performance sensitive place, let's
keep it as is for now?
* Re: [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket
2025-03-10 5:02 ` Stanislav Fomichev
@ 2025-03-10 5:45 ` Mina Almasry
0 siblings, 0 replies; 19+ messages in thread
From: Mina Almasry @ 2025-03-10 5:45 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Jakub Kicinski, Stanislav Fomichev, netdev, davem, edumazet,
pabeni, linux-kernel, horms, donald.hunter, michael.chan,
pavan.chebbi, andrew+netdev, jdamato, xuanzhuo, asml.silence, dw
On Sun, Mar 9, 2025 at 10:02 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
>
> On 03/09, Mina Almasry wrote:
> > On Fri, Mar 7, 2025 at 3:43 PM Stanislav Fomichev <stfomichev@gmail.com> wrote:
> > >
> > > On 03/07, Jakub Kicinski wrote:
> > > > On Fri, 7 Mar 2025 07:57:24 -0800 Stanislav Fomichev wrote:
> > > > > diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
> > > > > index a219be90c739..8acdeeae24e7 100644
> > > > > --- a/net/core/netdev-genl.c
> > > > > +++ b/net/core/netdev-genl.c
> > > > > @@ -859,6 +859,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > > > goto err_genlmsg_free;
> > > > > }
> > > > >
> > > > > + mutex_lock(&priv->lock);
> > > > > rtnl_lock();
> > > > >
> > > > > netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> > > > > @@ -925,6 +926,7 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > > > > net_devmem_unbind_dmabuf(binding);
> > > > > err_unlock:
> > > > > rtnl_unlock();
> > > > > + mutex_unlock(&priv->lock);
> > > > > err_genlmsg_free:
> > > > > nlmsg_free(rsp);
> > > > > return err;
> > > >
> > > > I think you're missing an unlock before successful return here no?
> > >
> > > Yes, thanks! :-( I have tested some of this code with Mina's latest TX + my
> > > loopback mode, but it doesn't have any RX tests.. Will try to hack
> > > something together to run RX bind before I repost.
> >
> > Is the existing RX test not working for you?
> >
> > Also running `./ncdevmem` manually on a driver you have that supports
> > devmem will test the binding patch.
>
> It's a bit of a pita to run everything right now since drivers are
> not in the tree :-(
>
> > I wonder if we can change list_head to xarray, which manages its own
> > locking, instead of list_head plus manual locking. Just an idea, I
> > don't have a strong preference here. It may be annoying that xarray do
> > lookups by an index, so we have to store the index somewhere. But if
> > all we do here is add to the xarray and later loop over it to unbind
> > elements, we don't need to store the indexes anywhere.
>
> Yeah, having to keep the index around might be a bit awkward. And
> since this is not a particularly performance sensitive place, let's
> keep it as is for now?
No strong preference from my end.
--
Thanks,
Mina
* [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations
2025-03-07 15:57 [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Stanislav Fomichev
` (2 preceding siblings ...)
2025-03-07 15:57 ` [PATCH net-next v1 3/4] net: add granular lock for the netdev netlink socket Stanislav Fomichev
@ 2025-03-07 15:57 ` Stanislav Fomichev
2025-03-07 23:39 ` Jakub Kicinski
2025-03-08 3:22 ` [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Jakub Kicinski
4 siblings, 1 reply; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 15:57 UTC (permalink / raw)
To: netdev
Cc: davem, edumazet, kuba, pabeni, linux-kernel, horms, donald.hunter,
michael.chan, pavan.chebbi, andrew+netdev, jdamato, sdf, xuanzhuo,
almasrymina, asml.silence, dw
All drivers that use the queue API are already converted to use the
netdev instance lock. Move netdev instance lock management to
the netlink layer and drop rtnl_lock.
Cc: Mina Almasry <almasrymina@google.com>
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 4 ++--
drivers/net/netdevsim/netdev.c | 4 ++--
net/core/devmem.c | 3 ++-
net/core/netdev-genl.c | 18 +++++-------------
net/core/netdev_rx_queue.c | 15 +++++----------
5 files changed, 16 insertions(+), 28 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 1a1e6da77777..9b936cfc1d29 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -11380,14 +11380,14 @@ static void bnxt_irq_affinity_notify(struct irq_affinity_notify *notify,
if (pcie_tph_set_st_entry(irq->bp->pdev, irq->msix_nr, tag))
return;
- rtnl_lock();
+ netdev_lock(irq->bp->dev);
if (netif_running(irq->bp->dev)) {
err = netdev_rx_queue_restart(irq->bp->dev, irq->ring_nr);
if (err)
netdev_err(irq->bp->dev,
"RX queue restart failed: err=%d\n", err);
}
- rtnl_unlock();
+ netdev_unlock(irq->bp->dev);
}
static void bnxt_irq_affinity_release(struct kref *ref)
diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 54d03b0628d2..1ba2ff5e4453 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -786,7 +786,7 @@ nsim_qreset_write(struct file *file, const char __user *data,
if (ret != 2)
return -EINVAL;
- rtnl_lock();
+ netdev_lock(ns->netdev);
if (queue >= ns->netdev->real_num_rx_queues) {
ret = -EINVAL;
goto exit_unlock;
@@ -800,7 +800,7 @@ nsim_qreset_write(struct file *file, const char __user *data,
ret = count;
exit_unlock:
- rtnl_unlock();
+ netdev_unlock(ns->netdev);
return ret;
}
diff --git a/net/core/devmem.c b/net/core/devmem.c
index c16cdac46bed..e48eb42d7377 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -132,9 +132,10 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
rxq->mp_params.mp_priv = NULL;
rxq->mp_params.mp_ops = NULL;
+ netdev_lock(binding->dev);
rxq_idx = get_netdev_rx_queue_index(rxq);
-
WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
+ netdev_unlock(binding->dev);
}
net_devmem_dmabuf_binding_put(binding);
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 8acdeeae24e7..1de28f9e0219 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -481,8 +481,6 @@ int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info)
if (!rsp)
return -ENOMEM;
- rtnl_lock();
-
netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex);
if (netdev) {
err = netdev_nl_queue_fill(rsp, netdev, q_id, q_type, info);
@@ -491,8 +489,6 @@ int netdev_nl_queue_get_doit(struct sk_buff *skb, struct genl_info *info)
err = -ENODEV;
}
- rtnl_unlock();
-
if (err)
goto err_free_msg;
@@ -541,7 +537,6 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
if (info->attrs[NETDEV_A_QUEUE_IFINDEX])
ifindex = nla_get_u32(info->attrs[NETDEV_A_QUEUE_IFINDEX]);
- rtnl_lock();
if (ifindex) {
netdev = netdev_get_by_index_lock(net, ifindex);
if (netdev) {
@@ -559,7 +554,6 @@ int netdev_nl_queue_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
ctx->txq_idx = 0;
}
}
- rtnl_unlock();
return err;
}
@@ -860,12 +854,11 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
}
mutex_lock(&priv->lock);
- rtnl_lock();
- netdev = __dev_get_by_index(genl_info_net(info), ifindex);
+ netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex);
if (!netdev || !netif_device_present(netdev)) {
err = -ENODEV;
- goto err_unlock;
+ goto err_unlock_sock;
}
if (dev_xdp_prog_count(netdev)) {
@@ -918,14 +911,15 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
if (err)
goto err_unbind;
- rtnl_unlock();
+ netdev_unlock(netdev);
return 0;
err_unbind:
net_devmem_unbind_dmabuf(binding);
err_unlock:
- rtnl_unlock();
+ netdev_unlock(netdev);
+err_unlock_sock:
mutex_unlock(&priv->lock);
err_genlmsg_free:
nlmsg_free(rsp);
@@ -945,9 +939,7 @@ void netdev_nl_sock_priv_destroy(struct netdev_nl_sock *priv)
mutex_lock(&priv->lock);
list_for_each_entry_safe(binding, temp, &priv->bindings, list) {
- rtnl_lock();
net_devmem_unbind_dmabuf(binding);
- rtnl_unlock();
}
mutex_unlock(&priv->lock);
}
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index 7419c41fd3cb..8fd5d784ac09 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -18,7 +18,7 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
!qops->ndo_queue_mem_alloc || !qops->ndo_queue_start)
return -EOPNOTSUPP;
- ASSERT_RTNL();
+ netdev_assert_locked(dev);
new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
if (!new_mem)
@@ -30,8 +30,6 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
goto err_free_new_mem;
}
- netdev_lock(dev);
-
err = qops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
if (err)
goto err_free_old_mem;
@@ -54,8 +52,6 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
qops->ndo_queue_mem_free(dev, old_mem);
- netdev_unlock(dev);
-
kvfree(old_mem);
kvfree(new_mem);
@@ -80,7 +76,6 @@ int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
qops->ndo_queue_mem_free(dev, new_mem);
err_free_old_mem:
- netdev_unlock(dev);
kvfree(old_mem);
err_free_new_mem:
@@ -118,9 +113,9 @@ int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx,
{
int ret;
- rtnl_lock();
+ netdev_lock(dev);
ret = __net_mp_open_rxq(dev, ifq_idx, p);
- rtnl_unlock();
+ netdev_unlock(dev);
return ret;
}
@@ -153,7 +148,7 @@ static void __net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
struct pp_memory_provider_params *old_p)
{
- rtnl_lock();
+ netdev_lock(dev);
__net_mp_close_rxq(dev, ifq_idx, old_p);
- rtnl_unlock();
+ netdev_unlock(dev);
}
--
2.48.1
^ permalink raw reply related [flat|nested] 19+ messages in thread

* Re: [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations
2025-03-07 15:57 ` [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations Stanislav Fomichev
@ 2025-03-07 23:39 ` Jakub Kicinski
2025-03-07 23:50 ` Stanislav Fomichev
0 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-07 23:39 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, pabeni, linux-kernel, horms,
donald.hunter, michael.chan, pavan.chebbi, andrew+netdev, jdamato,
xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 07:57:25 -0800 Stanislav Fomichev wrote:
> All drivers that use queue API are already converted to use
> netdev instance lock. Move netdev instance lock management to
> the netlink layer and drop rtnl_lock.
> @@ -860,12 +854,11 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> }
>
> mutex_lock(&priv->lock);
> - rtnl_lock();
>
> - netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> + netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex);
> if (!netdev || !netif_device_present(netdev)) {
> err = -ENODEV;
> - goto err_unlock;
> + goto err_unlock_sock;
> }
>
> if (dev_xdp_prog_count(netdev)) {
> @@ -918,14 +911,15 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> if (err)
> goto err_unbind;
>
> - rtnl_unlock();
> + netdev_unlock(netdev);
Ah, here's the unlock :)
Looks good for the devmem binding, I think; the other functions will
need a bit more careful handling. So perhaps drop the queue get changes?
I'm cooking some patches for the queue get and queue stats.
AFAIU we need helpers which will go over netdevs and either take rtnl
lock or instance lock, depending on whether the driver is "ops locked"
* Re: [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations
2025-03-07 23:39 ` Jakub Kicinski
@ 2025-03-07 23:50 ` Stanislav Fomichev
0 siblings, 0 replies; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-07 23:50 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On 03/07, Jakub Kicinski wrote:
> On Fri, 7 Mar 2025 07:57:25 -0800 Stanislav Fomichev wrote:
> > All drivers that use queue API are already converted to use
> > netdev instance lock. Move netdev instance lock management to
> > the netlink layer and drop rtnl_lock.
>
> > @@ -860,12 +854,11 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > }
> >
> > mutex_lock(&priv->lock);
> > - rtnl_lock();
> >
> > - netdev = __dev_get_by_index(genl_info_net(info), ifindex);
> > + netdev = netdev_get_by_index_lock(genl_info_net(info), ifindex);
> > if (!netdev || !netif_device_present(netdev)) {
> > err = -ENODEV;
> > - goto err_unlock;
> > + goto err_unlock_sock;
> > }
> >
> > if (dev_xdp_prog_count(netdev)) {
> > @@ -918,14 +911,15 @@ int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info)
> > if (err)
> > goto err_unbind;
> >
> > - rtnl_unlock();
> > + netdev_unlock(netdev);
>
> Ah, here's the unlock :)
mutex_unlock(&priv->lock) is still missing :(
> Looks good for the devmem binding, I think, the other functions will
> need a bit more careful handling. So perhaps drop the queue get changes?
> I'm cooking some patches for the queue get and queue stats.
> AFAIU we need helpers which will go over netdevs and either take rtnl
> lock or instance lock, depending on whether the driver is "ops locked"
Here is what I was tentatively playing with (rtnl_netdev_{,un}lock_ops
abomination):
https://github.com/fomichev/linux/commit/f791a23c358c7db0e798bc4181dc6c243c8ff77d
Which sort of does what you're suggesting in:
https://github.com/fomichev/linux/commit/392ae1f3ca823dc412a2dac2263b6c8355f6925d
Although I'm still unconditionally holding rtnl_lock during
for_each_netdev_dump..
* Re: [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs
2025-03-07 15:57 [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Stanislav Fomichev
` (3 preceding siblings ...)
2025-03-07 15:57 ` [PATCH net-next v1 4/4] net: drop rtnl_lock for queue_mgmt operations Stanislav Fomichev
@ 2025-03-08 3:22 ` Jakub Kicinski
2025-03-08 3:33 ` Stanislav Fomichev
4 siblings, 1 reply; 19+ messages in thread
From: Jakub Kicinski @ 2025-03-08 3:22 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, pabeni, linux-kernel, horms,
donald.hunter, michael.chan, pavan.chebbi, andrew+netdev, jdamato,
xuanzhuo, almasrymina, asml.silence, dw
On Fri, 7 Mar 2025 07:57:21 -0800 Stanislav Fomichev wrote:
> All drivers that use queue management APIs already depend on the netdev
> lock. Ultimately, we want to have most of the paths that work with
> specific netdev to be rtnl_lock-free (ethtool mostly in particular).
> Queue API currently has a much smaller API surface, so start with
> rtnl_lock from it:
>
> - add mutex to each dmabuf binding (to replace rtnl_lock)
> - protect global net_devmem_dmabuf_bindings with a new (global) lock
> - move netdev lock management to the callers of netdev_rx_queue_restart
> and drop rtnl_lock
One more note, looks like this silently conflicts with my:
https://lore.kernel.org/all/20250307183006.2312761-1-kuba@kernel.org/
You need to add:
#include <net/netdev_lock.h>
to net/core/netdev_rx_queue.c, otherwise the two series together break
the build.
* Re: [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs
2025-03-08 3:22 ` [PATCH net-next v1 0/4] net: remove rtnl_lock from the callers of queue APIs Jakub Kicinski
@ 2025-03-08 3:33 ` Stanislav Fomichev
0 siblings, 0 replies; 19+ messages in thread
From: Stanislav Fomichev @ 2025-03-08 3:33 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
horms, donald.hunter, michael.chan, pavan.chebbi, andrew+netdev,
jdamato, xuanzhuo, almasrymina, asml.silence, dw
On 03/07, Jakub Kicinski wrote:
> On Fri, 7 Mar 2025 07:57:21 -0800 Stanislav Fomichev wrote:
> > All drivers that use queue management APIs already depend on the netdev
> > lock. Ultimately, we want to have most of the paths that work with
> > specific netdev to be rtnl_lock-free (ethtool mostly in particular).
> > Queue API currently has a much smaller API surface, so start with
> > rtnl_lock from it:
> >
> > - add mutex to each dmabuf binding (to replace rtnl_lock)
> > - protect global net_devmem_dmabuf_bindings with a new (global) lock
> > - move netdev lock management to the callers of netdev_rx_queue_restart
> > and drop rtnl_lock
>
> One more note, looks like this silently conflicts with my:
> https://lore.kernel.org/all/20250307183006.2312761-1-kuba@kernel.org/
>
> You need to add:
>
> #include <net/netdev_lock.h>
>
> to net/core/netdev_rx_queue.c, otherwise the series together break
> the build.
Noted, thanks!