Linux Netfilter development
* [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
@ 2026-01-27  3:06 Brian Witte
  2026-01-28  0:09 ` Florian Westphal
  0 siblings, 1 reply; 7+ messages in thread
From: Brian Witte @ 2026-01-27  3:06 UTC (permalink / raw)
  To: netfilter-devel; +Cc: pablo, fw, kadlec

Add a dedicated reset_mutex to serialize reset operations instead of
reusing the commit_mutex. This fixes a circular locking dependency
between commit_mutex, nfnl_subsys_ipset, and nlk_cb_mutex-NETFILTER
that could lead to deadlock when nft reset, ipset list, and
iptables-nft with set match run concurrently:

  CPU0 (nft reset):        nlk_cb_mutex -> commit_mutex
  CPU1 (ipset list):       nfnl_subsys_ipset -> nlk_cb_mutex
  CPU2 (iptables -m set):  commit_mutex -> nfnl_subsys_ipset

The reset_mutex only serializes concurrent reset operations to prevent
counter underruns, which is all that's needed. Breaking the commit_mutex
dependency in the dump-reset path eliminates the circular lock chain.

Reported-by: syzbot+ff16b505ec9152e5f448@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ff16b505ec9152e5f448
Signed-off-by: Brian Witte <brianwitte@mailfence.com>
---
 include/net/netfilter/nf_tables.h |  1 +
 net/netfilter/nf_tables_api.c     | 30 +++++++++++++++---------------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
index 31906f90706e..85cdd93e564b 100644
--- a/include/net/netfilter/nf_tables.h
+++ b/include/net/netfilter/nf_tables.h
@@ -1931,6 +1931,7 @@ struct nftables_pernet {
 	struct list_head	module_list;
 	struct list_head	notify_list;
 	struct mutex		commit_mutex;
+	struct mutex		reset_mutex;
 	u64			table_handle;
 	u64			tstamp;
 	unsigned int		gc_seq;
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index be4924aeaf0e..c82b7875c49c 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3907,13 +3907,12 @@ static int nf_tables_dumpreset_rules(struct sk_buff *skb,
 	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
 	int ret;
 
-	/* Mutex is held is to prevent that two concurrent dump-and-reset calls
-	 * do not underrun counters and quotas. The commit_mutex is used for
-	 * the lack a better lock, this is not transaction path.
+	/* Mutex is held so that two concurrent dump-and-reset calls
+	 * do not underrun counters and quotas.
 	 */
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 	ret = nf_tables_dump_rules(skb, cb);
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 
 	return ret;
 }
@@ -4057,9 +4056,9 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
 	if (!try_module_get(THIS_MODULE))
 		return -EINVAL;
 	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 	skb2 = nf_tables_getrule_single(portid, info, nla, true);
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 	rcu_read_lock();
 	module_put(THIS_MODULE);
 
@@ -6346,7 +6345,7 @@ static int nf_tables_dumpreset_set(struct sk_buff *skb,
 	struct nft_set_dump_ctx *dump_ctx = cb->data;
 	int ret, skip = cb->args[0];
 
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 
 	ret = nf_tables_dump_set(skb, cb);
 
@@ -6354,7 +6353,7 @@ static int nf_tables_dumpreset_set(struct sk_buff *skb,
 		audit_log_nft_set_reset(dump_ctx->ctx.table, cb->seq,
 					cb->args[0] - skip);
 
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 
 	return ret;
 }
@@ -6671,7 +6670,7 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
 	if (!try_module_get(THIS_MODULE))
 		return -EINVAL;
 	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 	rcu_read_lock();
 
 	err = nft_set_dump_ctx_init(&dump_ctx, skb, info, nla, true);
@@ -6690,7 +6689,7 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
 
 out_unlock:
 	rcu_read_unlock();
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 	rcu_read_lock();
 	module_put(THIS_MODULE);
 
@@ -8552,9 +8551,9 @@ static int nf_tables_dumpreset_obj(struct sk_buff *skb,
 	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
 	int ret;
 
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 	ret = nf_tables_dump_obj(skb, cb);
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 
 	return ret;
 }
@@ -8693,9 +8692,9 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
 	if (!try_module_get(THIS_MODULE))
 		return -EINVAL;
 	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
+	mutex_lock(&nft_net->reset_mutex);
 	skb2 = nf_tables_getobj_single(portid, info, nla, true);
-	mutex_unlock(&nft_net->commit_mutex);
+	mutex_unlock(&nft_net->reset_mutex);
 	rcu_read_lock();
 	module_put(THIS_MODULE);
 
@@ -12194,6 +12193,7 @@ static int __net_init nf_tables_init_net(struct net *net)
 	INIT_LIST_HEAD(&nft_net->module_list);
 	INIT_LIST_HEAD(&nft_net->notify_list);
 	mutex_init(&nft_net->commit_mutex);
+	mutex_init(&nft_net->reset_mutex);
 	net->nft.base_seq = 1;
 	nft_net->gc_seq = 0;
 	nft_net->validate_state = NFT_VALIDATE_SKIP;
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-01-27  3:06 [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations Brian Witte
@ 2026-01-28  0:09 ` Florian Westphal
  2026-01-30  1:56   ` Brian Witte
  2026-02-02 23:01   ` Pablo Neira Ayuso
  0 siblings, 2 replies; 7+ messages in thread
From: Florian Westphal @ 2026-01-28  0:09 UTC (permalink / raw)
  To: Brian Witte; +Cc: netfilter-devel, pablo, kadlec

Brian Witte <brianwitte@mailfence.com> wrote:
> Add a dedicated reset_mutex to serialize reset operations instead of
> reusing the commit_mutex. This fixes a circular locking dependency
> between commit_mutex, nfnl_subsys_ipset, and nlk_cb_mutex-NETFILTER
> that could lead to deadlock when nft reset, ipset list, and
> iptables-nft with set match run concurrently:
> 
>   CPU0 (nft reset):        nlk_cb_mutex -> commit_mutex
>   CPU1 (ipset list):       nfnl_subsys_ipset -> nlk_cb_mutex
>   CPU2 (iptables -m set):  commit_mutex -> nfnl_subsys_ipset
> 
> The reset_mutex only serializes concurrent reset operations to prevent
> counter underruns, which is all that's needed. Breaking the commit_mutex
> dependency in the dump-reset path eliminates the circular lock chain.
> 
> Reported-by: syzbot+ff16b505ec9152e5f448@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=ff16b505ec9152e5f448
> Signed-off-by: Brian Witte <brianwitte@mailfence.com>

This needs more work:

-----------------------------
net/netfilter/nf_tables_api.c:1002 RCU-list traversed in non-reader section!!

other info that might help us debug this:

rcu_scheduler_active = 2, debug_locks = 1
1 lock held by nft/17539:
 #0: ffff888132018368 (&nft_net->reset_mutex){+.+.}-{4:4}, at: nf_tables_getobj_reset+0x19e/0x5a0 [nf_tables]

stack backtrace:
CPU: 4 UID: 0 PID: 17539 Comm: nft Not tainted 6.19.0-rc6+ #9 PREEMPT(full)
Call Trace:
 lockdep_rcu_suspicious.cold+0x4f/0xb1
 nft_table_lookup.part.0+0x1e7/0x220 [nf_tables]
 nf_tables_getobj_single+0x196/0x5a0 [nf_tables]
 nf_tables_getobj_reset+0x1b1/0x5a0 [nf_tables]
 nfnetlink_rcv_msg+0x49e/0xf00

Please run nftables.git tests/shell/run-tests.sh with

CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y
CONFIG_PROVE_RCU_LIST=y

This warning is not a false positive: the list traversal was
fine in the reset case because we held the transaction mutex.

Now that we don't, we need to hold rcu_read_lock().

Maybe it's worth investigating if we should instead protect
only the reset action itself, i.e. add private reset spinlocks
in nft_quota_do_dump() et al?


* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-01-28  0:09 ` Florian Westphal
@ 2026-01-30  1:56   ` Brian Witte
  2026-01-30 11:51     ` Florian Westphal
  2026-02-02 23:01   ` Pablo Neira Ayuso
  1 sibling, 1 reply; 7+ messages in thread
From: Brian Witte @ 2026-01-30  1:56 UTC (permalink / raw)
  To: fw; +Cc: kadlec, netfilter-devel, pablo

Florian Westphal <fw@strlen.de> wrote:
> > Maybe it's worth investigating if we should instead protect
> only the reset action itself, i.e. add private reset spinlocks
> in nft_quota_do_dump() et al?

Thanks for the suggestion. Implemented per-object spinlocks as proposed.
Sending inline rather than v2 since I'm not certain about the approach.

Ran tests/shell/run-tests.sh with PROVE_LOCKING, PROVE_RCU, and
PROVE_RCU_LIST enabled - no warnings.

Uses static lock class keys to avoid lockdep exhaustion with many objects.

Two questions:

1. Should this be spin_lock_bh()? I think plain spin_lock() is fine
   since the packet path doesn't take this lock.

2. The nf_tables_api.c changes also remove the try_module_get/module_put
   and rcu_read_unlock/rcu_read_lock dance - that was only needed because
   mutex_lock can sleep and we couldn't hold RCU across it. Since
   spin_lock doesn't sleep, we stay under RCU the entire time. Please
   confirm this is correct.

---
 net/netfilter/nf_tables_api.c | 60 ++---------------------------------
 net/netfilter/nft_counter.c   | 18 ++++++++++-
 net/netfilter/nft_quota.c     | 29 ++++++++++++++---
 3 files changed, 45 insertions(+), 62 deletions(-)

diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index be4924aeaf0e..11f2e467081e 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3904,18 +3904,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
 static int nf_tables_dumpreset_rules(struct sk_buff *skb,
 				    struct netlink_callback *cb)
 {
-	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
-	int ret;
-
-	/* Mutex is held is to prevent that two concurrent dump-and-reset calls
-	* do not underrun counters and quotas. The commit_mutex is used for
-	* the lack a better lock, this is not transaction path.
-	*/
-	mutex_lock(&nft_net->commit_mutex);
-	ret = nf_tables_dump_rules(skb, cb);
-	mutex_unlock(&nft_net->commit_mutex);
-
-	return ret;
+	return nf_tables_dump_rules(skb, cb);
 }

 static int nf_tables_dump_rules_start(struct netlink_callback *cb)
@@ -4036,7 +4025,6 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
 				  const struct nfnl_info *info,
 				  const struct nlattr * const nla[])
 {
-	struct nftables_pernet *nft_net = nft_pernet(info->net);
 	u32 portid = NETLINK_CB(skb).portid;
 	struct net *net = info->net;
 	struct sk_buff *skb2;
@@ -4054,15 +4042,7 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
 		return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
 	}

-	if (!try_module_get(THIS_MODULE))
-		return -EINVAL;
-	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
 	skb2 = nf_tables_getrule_single(portid, info, nla, true);
-	mutex_unlock(&nft_net->commit_mutex);
-	rcu_read_lock();
-	module_put(THIS_MODULE);
-
 	if (IS_ERR(skb2))
 		return PTR_ERR(skb2);

@@ -6342,20 +6322,15 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
 static int nf_tables_dumpreset_set(struct sk_buff *skb,
 				  struct netlink_callback *cb)
 {
-	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
 	struct nft_set_dump_ctx *dump_ctx = cb->data;
 	int ret, skip = cb->args[0];

-	mutex_lock(&nft_net->commit_mutex);
-
 	ret = nf_tables_dump_set(skb, cb);

 	if (cb->args[0] > skip)
 		audit_log_nft_set_reset(dump_ctx->ctx.table, cb->seq,
 					cb->args[0] - skip);

-	mutex_unlock(&nft_net->commit_mutex);
-
 	return ret;
 }

@@ -6643,7 +6618,6 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
 				     const struct nfnl_info *info,
 				     const struct nlattr * const nla[])
 {
-	struct nftables_pernet *nft_net = nft_pernet(info->net);
 	struct netlink_ext_ack *extack = info->extack;
 	struct nft_set_dump_ctx dump_ctx;
 	int rem, err = 0, nelems = 0;
@@ -6668,15 +6642,9 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
 	if (!nla[NFTA_SET_ELEM_LIST_ELEMENTS])
 		return -EINVAL;

-	if (!try_module_get(THIS_MODULE))
-		return -EINVAL;
-	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
-	rcu_read_lock();
-
 	err = nft_set_dump_ctx_init(&dump_ctx, skb, info, nla, true);
 	if (err)
-		goto out_unlock;
+		return err;

 	nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
 		err = nft_get_set_elem(&dump_ctx.ctx, dump_ctx.set, attr, true);
@@ -6688,12 +6656,6 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
 	}
 	audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems);

-out_unlock:
-	rcu_read_unlock();
-	mutex_unlock(&nft_net->commit_mutex);
-	rcu_read_lock();
-	module_put(THIS_MODULE);
-
 	return err;
 }

@@ -8549,14 +8511,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
 static int nf_tables_dumpreset_obj(struct sk_buff *skb,
 				  struct netlink_callback *cb)
 {
-	struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
-	int ret;
-
-	mutex_lock(&nft_net->commit_mutex);
-	ret = nf_tables_dump_obj(skb, cb);
-	mutex_unlock(&nft_net->commit_mutex);
-
-	return ret;
+	return nf_tables_dump_obj(skb, cb);
 }

 static int nf_tables_dump_obj_start(struct netlink_callback *cb)
@@ -8672,7 +8627,6 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
 				 const struct nfnl_info *info,
 				 const struct nlattr * const nla[])
 {
-	struct nftables_pernet *nft_net = nft_pernet(info->net);
 	u32 portid = NETLINK_CB(skb).portid;
 	struct net *net = info->net;
 	struct sk_buff *skb2;
@@ -8690,15 +8644,7 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
 		return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
 	}

-	if (!try_module_get(THIS_MODULE))
-		return -EINVAL;
-	rcu_read_unlock();
-	mutex_lock(&nft_net->commit_mutex);
 	skb2 = nf_tables_getobj_single(portid, info, nla, true);
-	mutex_unlock(&nft_net->commit_mutex);
-	rcu_read_lock();
-	module_put(THIS_MODULE);
-
 	if (IS_ERR(skb2))
 		return PTR_ERR(skb2);

diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
index cc7325329496..ae3c339cbcee 100644
--- a/net/netfilter/nft_counter.c
+++ b/net/netfilter/nft_counter.c
@@ -28,10 +28,13 @@ struct nft_counter_tot {

 struct nft_counter_percpu_priv {
 	struct nft_counter __percpu *counter;
+	spinlock_t	reset_lock;	/* protects concurrent reset */
 };

 static DEFINE_PER_CPU(struct u64_stats_sync, nft_counter_sync);

+static struct lock_class_key nft_counter_reset_key;
+
 static inline void nft_counter_do_eval(struct nft_counter_percpu_priv *priv,
 				      struct nft_regs *regs,
 				      const struct nft_pktinfo *pkt)
@@ -81,6 +84,9 @@ static int nft_counter_do_init(const struct nlattr * const tb[],
 	}

 	priv->counter = cpu_stats;
+	spin_lock_init(&priv->reset_lock);
+	lockdep_set_class(&priv->reset_lock, &nft_counter_reset_key);
+
 	return 0;
 }

@@ -154,6 +160,9 @@ static int nft_counter_do_dump(struct sk_buff *skb,
 {
 	struct nft_counter_tot total;

+	if (reset)
+		spin_lock(&priv->reset_lock);
+
 	nft_counter_fetch(priv, &total);

 	if (nla_put_be64(skb, NFTA_COUNTER_BYTES, cpu_to_be64(total.bytes),
@@ -162,12 +171,16 @@ static int nft_counter_do_dump(struct sk_buff *skb,
 			NFTA_COUNTER_PAD))
 		goto nla_put_failure;

-	if (reset)
+	if (reset) {
 		nft_counter_reset(priv, &total);
+		spin_unlock(&priv->reset_lock);
+	}

 	return 0;

 nla_put_failure:
+	if (reset)
+		spin_unlock(&priv->reset_lock);
 	return -1;
 }

@@ -254,6 +267,9 @@ static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src, g
 	u64_stats_set(&this_cpu->bytes, total.bytes);

 	priv_clone->counter = cpu_stats;
+	spin_lock_init(&priv_clone->reset_lock);
+	lockdep_set_class(&priv_clone->reset_lock, &nft_counter_reset_key);
+
 	return 0;
 }

diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c
index df0798da2329..a66e06cdb3a9 100644
--- a/net/netfilter/nft_quota.c
+++ b/net/netfilter/nft_quota.c
@@ -16,8 +16,11 @@ struct nft_quota {
 	atomic64_t	quota;
 	unsigned long	flags;
 	atomic64_t	*consumed;
+	spinlock_t	reset_lock;	/* protects concurrent reset */
 };

+static struct lock_class_key nft_quota_reset_key;
+
 static inline bool nft_overquota(struct nft_quota *priv,
 				const struct sk_buff *skb,
 				bool *report)
@@ -103,6 +106,8 @@ static int nft_quota_do_init(const struct nlattr * const tb[],
 	atomic64_set(&priv->quota, quota);
 	priv->flags = flags;
 	atomic64_set(priv->consumed, consumed);
+	spin_lock_init(&priv->reset_lock);
+	lockdep_set_class(&priv->reset_lock, &nft_quota_reset_key);

 	return 0;
 }
@@ -134,13 +139,24 @@ static void nft_quota_obj_update(struct nft_object *obj,
 	priv->flags = newpriv->flags;
 }

+static void nft_quota_reset(struct nft_quota *priv, u64 consumed)
+{
+	atomic64_sub(consumed, priv->consumed);
+	clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags);
+}
+
 static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
 			    bool reset)
 {
 	u64 consumed, consumed_cap, quota;
-	u32 flags = priv->flags;
+	u32 flags;
+
+	if (reset)
+		spin_lock(&priv->reset_lock);
+
+	flags = priv->flags;

-	/* Since we inconditionally increment consumed quota for each packet
+	/* Since we unconditionally increment consumed quota for each packet
 	* that we see, don't go over the quota boundary in what we send to
 	* userspace.
 	*/
@@ -161,12 +177,15 @@ static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
 		goto nla_put_failure;

 	if (reset) {
-		atomic64_sub(consumed, priv->consumed);
-		clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags);
+		nft_quota_reset(priv, consumed);
+		spin_unlock(&priv->reset_lock);
 	}
+
 	return 0;

 nla_put_failure:
+	if (reset)
+		spin_unlock(&priv->reset_lock);
 	return -1;
 }

@@ -252,6 +271,8 @@ static int nft_quota_clone(struct nft_expr *dst, const struct nft_expr *src, gfp
 		return -ENOMEM;

 	*priv_dst->consumed = *priv_src->consumed;
+	spin_lock_init(&priv_dst->reset_lock);
+	lockdep_set_class(&priv_dst->reset_lock, &nft_quota_reset_key);

 	return 0;
 }
--
2.47.3


* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-01-30  1:56   ` Brian Witte
@ 2026-01-30 11:51     ` Florian Westphal
  0 siblings, 0 replies; 7+ messages in thread
From: Florian Westphal @ 2026-01-30 11:51 UTC (permalink / raw)
  To: Brian Witte; +Cc: kadlec, netfilter-devel, pablo

Brian Witte <brianwitte@mailfence.com> wrote:
> Florian Westphal <fw@strlen.de> wrote:
> > Maybe its worth investigating if we should instead protect
> > only the reset action itself, i.e. add private reset spinlocks
> > in nft_quota_do_dump() et al?
> 
> Thanks for the suggestion. Implemented per-object spinlocks as proposed.
> Sending inline rather than v2 since I'm not certain about the approach.
> 
> Ran tests/shell/run-tests.sh with PROVE_LOCKING, PROVE_RCU, and
> PROVE_RCU_LIST enabled - no warnings.
> 
> Uses static lock class keys to avoid lockdep exhaustion with many objects.
> 
> Two questions:
> 
> 1. Should this be spin_lock_bh()? I think plain spin_lock() is fine
>    since the packet path doesn't take this lock.

It's fine if we're interrupted while holding this lock; no (soft)irq
grabs it.

> 2. The nf_tables_api.c changes also remove the try_module_get/module_put
>    and rcu_read_unlock/rcu_read_lock dance - that was only needed because
>    mutex_lock can sleep and we couldn't hold RCU across it. Since
>    spin_lock doesn't sleep, we stay under RCU the entire time. Please
>    confirm this is correct.

Yes, this dance isn't needed anymore.

> diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
> index cc7325329496..ae3c339cbcee 100644
> --- a/net/netfilter/nft_counter.c
> +++ b/net/netfilter/nft_counter.c
> @@ -28,10 +28,13 @@ struct nft_counter_tot {
> 
>  struct nft_counter_percpu_priv {
>  	struct nft_counter __percpu *counter;
> +	spinlock_t	reset_lock;	/* protects concurrent reset */
>  };

I don't think we need per-object granularity;
a single spinlock in the nft_pernet area is enough for this.


* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-01-28  0:09 ` Florian Westphal
  2026-01-30  1:56   ` Brian Witte
@ 2026-02-02 23:01   ` Pablo Neira Ayuso
  2026-02-02 23:06     ` Florian Westphal
  1 sibling, 1 reply; 7+ messages in thread
From: Pablo Neira Ayuso @ 2026-02-02 23:01 UTC (permalink / raw)
  To: Florian Westphal; +Cc: Brian Witte, netfilter-devel, kadlec

On Wed, Jan 28, 2026 at 01:09:10AM +0100, Florian Westphal wrote:
> Brian Witte <brianwitte@mailfence.com> wrote:
> > Add a dedicated reset_mutex to serialize reset operations instead of
> > reusing the commit_mutex. This fixes a circular locking dependency
> > between commit_mutex, nfnl_subsys_ipset, and nlk_cb_mutex-NETFILTER
> > that could lead to deadlock when nft reset, ipset list, and
> > iptables-nft with set match run concurrently:
> > 
> >   CPU0 (nft reset):        nlk_cb_mutex -> commit_mutex
> >   CPU1 (ipset list):       nfnl_subsys_ipset -> nlk_cb_mutex
> >   CPU2 (iptables -m set):  commit_mutex -> nfnl_subsys_ipset
> > 
> > The reset_mutex only serializes concurrent reset operations to prevent
> > counter underruns, which is all that's needed. Breaking the commit_mutex
> > dependency in the dump-reset path eliminates the circular lock chain.
> > 
> > Reported-by: syzbot+ff16b505ec9152e5f448@syzkaller.appspotmail.com
> > Closes: https://syzkaller.appspot.com/bug?extid=ff16b505ec9152e5f448
> > Signed-off-by: Brian Witte <brianwitte@mailfence.com>
> 
> This needs more work:
> 
> -----------------------------
> net/netfilter/nf_tables_api.c:1002 RCU-list traversed in non-reader section!!
> 
> other info that might help us debug this:
> 
> rcu_scheduler_active = 2, debug_locks = 1
> 1 lock held by nft/17539:
>  #0: ffff888132018368 (&nft_net->reset_mutex){+.+.}-{4:4}, at: nf_tables_getobj_reset+0x19e/0x5a0 [nf_tables]
> 
> stack backtrace:
> CPU: 4 UID: 0 PID: 17539 Comm: nft Not tainted 6.19.0-rc6+ #9 PREEMPT(full)
> Call Trace:
>  lockdep_rcu_suspicious.cold+0x4f/0xb1
>  nft_table_lookup.part.0+0x1e7/0x220 [nf_tables]
>  nf_tables_getobj_single+0x196/0x5a0 [nf_tables]
>  nf_tables_getobj_reset+0x1b1/0x5a0 [nf_tables]
>  nfnetlink_rcv_msg+0x49e/0xf00
> 
> Please run nftables.git tests/shell/run-tests.sh with
> 
> CONFIG_PROVE_LOCKING=y
> CONFIG_PROVE_RCU=y
> CONFIG_PROVE_RCU_LIST=y
> 
> This warning is not a false positive, the list traversal was
> fine for reset case because we held the transaction mutex.
> 
> Now that we don't, we need to hold rcu_read_lock().
> 
> Maybe its worth investigating if we should instead protect
> only the reset action itself, i.e. add private reset spinlocks
> in nft_quota_do_dump() et al?

Last time we discussed this:

- There was an attempt to make reset fully atomic (for the whole
  ruleset), which is not really possible because netlink dumps for a
  large ruleset might not fit into a single message, so it is not
  worth trying.

- Still, there could be two threads resetting the counters at the same
  time, and someone mentioned underrun is possible.

  Looking at nft_quota, it should be possible to use
  atomic64_xchg():

diff --git a/net/netfilter/nft_quota.c b/net/netfilter/nft_quota.c
index df0798da2329..4a501cc86192 100644
--- a/net/netfilter/nft_quota.c
+++ b/net/netfilter/nft_quota.c
@@ -144,7 +144,11 @@ static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
         * that we see, don't go over the quota boundary in what we send to
         * userspace.
         */
-       consumed = atomic64_read(priv->consumed);
+       if (reset)
+               consumed = atomic64_xchg(priv->consumed, 0);
+       else
+               consumed = atomic64_read(priv->consumed);
+
        quota = atomic64_read(&priv->quota);
        if (consumed >= quota) {
                consumed_cap = quota;
@@ -160,10 +164,9 @@ static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
            nla_put_be32(skb, NFTA_QUOTA_FLAGS, htonl(flags)))
                goto nla_put_failure;
 
-       if (reset) {
-               atomic64_sub(consumed, priv->consumed);
+       if (reset)
                clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags);
-       }
+
        return 0;
 
 nla_put_failure:

Note that priv->quota could be converted to use WRITE_ONCE/READ_ONCE
instead, because updates to the quota are very rare and happen only
from userspace (atomic64 is not needed).

Then, for nft_counter, it is a bit more complicated, maybe a per-netns
spinlock for counters is sufficient, to protect this
nft_counter_do_dump() when the reset flag is true.


* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-02-02 23:01   ` Pablo Neira Ayuso
@ 2026-02-02 23:06     ` Florian Westphal
  2026-02-02 23:43       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 7+ messages in thread
From: Florian Westphal @ 2026-02-02 23:06 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: Brian Witte, netfilter-devel, kadlec

Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Wed, Jan 28, 2026 at 01:09:10AM +0100, Florian Westphal wrote:
> > Brian Witte <brianwitte@mailfence.com> wrote:
> > > Maybe it's worth investigating if we should instead protect
> > only the reset action itself, i.e. add private reset spinlocks
> > in nft_quota_do_dump() et al?
> 
> Last time we discussed this:
> 
> - There was an attempt to make reset fully atomic (for the whole
>   ruleset), which is not really possible because netlink dumps for a
> >   large ruleset might not fit into a single message, so it is not
> >   worth trying.
> 
> - Still, there could be two threads resetting the counters at the same
>   time, and someone mentioned underrun is possible.
> 
> >   Looking at nft_quota, it should be possible to use
>   atomic64_xchg():

Yep, agree, some .dump callbacks can probably be reworked
to use atomic ops for the reset case.

> Then, for nft_counter, it is a bit more complicated, maybe a per-netns
> spinlock for counters is sufficient, to protect this
> nft_counter_do_dump() when the reset flag is true.

Yes, a per-netns spinlock for reset serialization inside the dumper
callbacks is what we discussed; I think it's the way to go.


* Re: [PATCH nf-next] netfilter: nf_tables: use dedicated mutex for reset operations
  2026-02-02 23:06     ` Florian Westphal
@ 2026-02-02 23:43       ` Pablo Neira Ayuso
  0 siblings, 0 replies; 7+ messages in thread
From: Pablo Neira Ayuso @ 2026-02-02 23:43 UTC (permalink / raw)
  To: Florian Westphal; +Cc: Brian Witte, netfilter-devel, kadlec

On Tue, Feb 03, 2026 at 12:06:43AM +0100, Florian Westphal wrote:
> Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> > On Wed, Jan 28, 2026 at 01:09:10AM +0100, Florian Westphal wrote:
> > > Brian Witte <brianwitte@mailfence.com> wrote:
> > > Maybe it's worth investigating if we should instead protect
> > > only the reset action itself, i.e. add private reset spinlocks
> > > in nft_quota_do_dump() et al?
> > 
> > Last time we discussed this:
> > 
> > - There was an attempt to make reset fully atomic (for the whole
> >   ruleset), which is not really possible because netlink dumps for a
> >   large ruleset might not fit into a single message, so it is not
> >   worth trying.
> > 
> > - Still, there could be two threads resetting the counters at the same
> >   time, and someone mentioned underrun is possible.
> > 
> >   Looking at nft_quota, it should be possible to use
> >   atomic64_xchg():
> 
> Yep, agree, some .dump callbacks can probably be reworked
> to use atomic ops for the reset case.

Only quota and counter regard the reset flag at this stage.

> > Then, for nft_counter, it is a bit more complicated, maybe a per-netns
> > spinlock for counters is sufficient, to protect this
> > nft_counter_do_dump() when the reset flag is true.
> 
> Yes, a per-netns spinlock for reset serialization inside the dumper
> callbacks is what we discussed, I think its the way to go.

OK.

