* [PATCH] netfilter: nft_ct: Use __refcount_inc() for per-CPU nft_ct_pcpu_template.
From: Sebastian Andrzej Siewior @ 2025-02-17 16:02 UTC (permalink / raw)
  To: netfilter-devel, coreteam, linux-rt-devel
  Cc: Pablo Neira Ayuso, Jozsef Kadlecsik, Thomas Gleixner

nft_ct_pcpu_template is a per-CPU variable and relies on disabled BH for
its locking. The refcounter is read, and if its value is one, the
refcounter is incremented and the variable is used - otherwise it is
already in use and is left untouched.

On PREEMPT_RT, local_bh_disable() does not provide per-CPU locking, so
the read-then-increment sequence is not atomic and therefore racy.
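
To illustrate the race, here is a minimal userspace model (an
illustration only, not the kernel code; C11 atomic_int stands in for
refcount_t):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int use = 1;	/* models ct->ct_general.use */

static bool try_claim_template_racy(void)
{
	/* The load and the increment are two separate operations.
	 * Another context scheduled in between can also observe the
	 * value 1, and both callers then treat the per-CPU template
	 * as exclusively theirs. */
	if (atomic_load(&use) == 1) {
		atomic_fetch_add(&use, 1);
		return true;	/* template claimed */
	}
	return false;		/* template already in use */
}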

This can be avoided by unconditionally using __refcount_inc(), which
increments the counter and returns the old value as a single atomic
operation. If the returned counter is not one, the variable is in use
and we need to decrement the counter again. Otherwise we can use it.
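
With the fix, the same model decides based on a single atomic
operation: atomic_fetch_add() returns the previous value, mirroring
what __refcount_inc(&r, &oldcnt) reports via oldcnt:

static bool try_claim_template_fixed(void)
{
	int oldcnt = atomic_fetch_add(&use, 1);

	if (oldcnt == 1)
		return true;		/* we own the template */
	atomic_fetch_sub(&use, 1);	/* busy: undo our increment */
	return false;
}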

Use __refcount_inc() instead of a read followed by a conditional
increment.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/netfilter/nft_ct.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
index 2e59aba681a13..d526e69a2a2b8 100644
--- a/net/netfilter/nft_ct.c
+++ b/net/netfilter/nft_ct.c
@@ -230,6 +230,7 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr,
 	enum ip_conntrack_info ctinfo;
 	u16 value = nft_reg_load16(&regs->data[priv->sreg]);
 	struct nf_conn *ct;
+	int oldcnt;
 
 	ct = nf_ct_get(skb, &ctinfo);
 	if (ct) /* already tracked */
@@ -250,10 +251,11 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr,
 
 	ct = this_cpu_read(nft_ct_pcpu_template);
 
-	if (likely(refcount_read(&ct->ct_general.use) == 1)) {
-		refcount_inc(&ct->ct_general.use);
+	__refcount_inc(&ct->ct_general.use, &oldcnt);
+	if (likely(oldcnt == 1)) {
 		nf_ct_zone_add(ct, &zone);
 	} else {
+		refcount_dec(&ct->ct_general.use);
 		/* previous skb got queued to userspace, allocate temporary
 		 * one until percpu template can be reused.
 		 */
-- 
2.47.2

* Re: [PATCH] netfilter: nft_ct: Use __refcount_inc() for per-CPU nft_ct_pcpu_template.
From: Florian Westphal @ 2025-02-17 16:15 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: netfilter-devel, coreteam, linux-rt-devel, Pablo Neira Ayuso,
	Jozsef Kadlecsik, Thomas Gleixner

Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
> nft_ct_pcpu_template is a per-CPU variable and relies on disabled BH for
> its locking. The refcounter is read, and if its value is one, the
> refcounter is incremented and the variable is used - otherwise it is
> already in use and is left untouched.
> 
> On PREEMPT_RT, local_bh_disable() does not provide per-CPU locking, so
> the read-then-increment sequence is not atomic and therefore racy.
> 
> This can be avoided by unconditionally using __refcount_inc(), which
> increments the counter and returns the old value as a single atomic
> operation. If the returned counter is not one, the variable is in use
> and we need to decrement the counter again. Otherwise we can use it.
> 
> Use __refcount_inc() instead of a read followed by a conditional
> increment.

Reviewed-by: Florian Westphal <fw@strlen.de>
Fixes: edee4f1e9245 ("netfilter: nft_ct: add zone id set support")

* Re: [PATCH] netfilter: nft_ct: Use __refcount_inc() for per-CPU nft_ct_pcpu_template.
From: Pablo Neira Ayuso @ 2025-03-05 21:54 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: netfilter-devel, coreteam, linux-rt-devel, Jozsef Kadlecsik,
	Thomas Gleixner

On Mon, Feb 17, 2025 at 05:02:42PM +0100, Sebastian Andrzej Siewior wrote:
> nft_ct_pcpu_template is a per-CPU variable and relies on disabled BH for
> its locking. The refcounter is read, and if its value is one, the
> refcounter is incremented and the variable is used - otherwise it is
> already in use and is left untouched.
> 
> On PREEMPT_RT, local_bh_disable() does not provide per-CPU locking, so
> the read-then-increment sequence is not atomic and therefore racy.
> 
> This can be avoided by unconditionally using __refcount_inc(), which
> increments the counter and returns the old value as a single atomic
> operation. If the returned counter is not one, the variable is in use
> and we need to decrement the counter again. Otherwise we can use it.
> 
> Use __refcount_inc() instead of a read followed by a conditional
> increment.

Applied to nf.git, thanks, and sorry for taking a while to collect this.
