* [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash
2023-06-06 7:41 [PATCH v2 net 0/2] rfs: annotate lockless accesses Eric Dumazet
@ 2023-06-06 7:41 ` Eric Dumazet
2023-06-06 9:40 ` Simon Horman
2023-06-06 17:40 ` Kuniyuki Iwashima
2023-06-06 7:41 ` [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table Eric Dumazet
2023-06-07 9:20 ` [PATCH v2 net 0/2] rfs: annotate lockless accesses patchwork-bot+netdevbpf
2 siblings, 2 replies; 8+ messages in thread
From: Eric Dumazet @ 2023-06-06 7:41 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Eric Dumazet
Add READ_ONCE()/WRITE_ONCE() on accesses to sk->sk_rxhash.
This also prevents a (smart?) compiler from removing the condition in:
if (sk->sk_rxhash != newval)
sk->sk_rxhash = newval;
We need the condition to avoid dirtying a shared cache line.
Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sock.h | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index b418425d7230c8cee81df34fcc66d771ea5085e9..6f428a7f356755e73852c0e0006f2eb533fc7f57 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1152,8 +1152,12 @@ static inline void sock_rps_record_flow(const struct sock *sk)
* OR an additional socket flag
* [1] : sk_state and sk_prot are in the same cache line.
*/
- if (sk->sk_state == TCP_ESTABLISHED)
- sock_rps_record_flow_hash(sk->sk_rxhash);
+ if (sk->sk_state == TCP_ESTABLISHED) {
+ /* This READ_ONCE() is paired with the WRITE_ONCE()
+ * from sock_rps_save_rxhash() and sock_rps_reset_rxhash().
+ */
+ sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash));
+ }
}
#endif
}
@@ -1162,15 +1166,19 @@ static inline void sock_rps_save_rxhash(struct sock *sk,
const struct sk_buff *skb)
{
#ifdef CONFIG_RPS
- if (unlikely(sk->sk_rxhash != skb->hash))
- sk->sk_rxhash = skb->hash;
+ /* The following WRITE_ONCE() is paired with the READ_ONCE()
+ * here, and another one in sock_rps_record_flow().
+ */
+ if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash))
+ WRITE_ONCE(sk->sk_rxhash, skb->hash);
#endif
}
static inline void sock_rps_reset_rxhash(struct sock *sk)
{
#ifdef CONFIG_RPS
- sk->sk_rxhash = 0;
+ /* Paired with READ_ONCE() in sock_rps_record_flow() */
+ WRITE_ONCE(sk->sk_rxhash, 0);
#endif
}
--
2.41.0.rc0.172.g3f132b7071-goog
* Re: [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash
2023-06-06 7:41 ` [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash Eric Dumazet
@ 2023-06-06 9:40 ` Simon Horman
2023-06-06 17:40 ` Kuniyuki Iwashima
1 sibling, 0 replies; 8+ messages in thread
From: Simon Horman @ 2023-06-06 9:40 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
eric.dumazet
On Tue, Jun 06, 2023 at 07:41:14AM +0000, Eric Dumazet wrote:
> Add READ_ONCE()/WRITE_ONCE() on accesses to sk->sk_rxhash.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
> if (sk->sk_rxhash != newval)
> sk->sk_rxhash = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
* Re: [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash
2023-06-06 7:41 ` [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash Eric Dumazet
2023-06-06 9:40 ` Simon Horman
@ 2023-06-06 17:40 ` Kuniyuki Iwashima
1 sibling, 0 replies; 8+ messages in thread
From: Kuniyuki Iwashima @ 2023-06-06 17:40 UTC (permalink / raw)
To: edumazet; +Cc: davem, eric.dumazet, kuba, netdev, pabeni, simon.horman, kuniyu
From: Eric Dumazet <edumazet@google.com>
Date: Tue, 6 Jun 2023 07:41:14 +0000
> Add READ_ONCE()/WRITE_ONCE() on accesses to sk->sk_rxhash.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
> if (sk->sk_rxhash != newval)
> sk->sk_rxhash = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Thank you!
> ---
> include/net/sock.h | 18 +++++++++++++-----
> 1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index b418425d7230c8cee81df34fcc66d771ea5085e9..6f428a7f356755e73852c0e0006f2eb533fc7f57 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1152,8 +1152,12 @@ static inline void sock_rps_record_flow(const struct sock *sk)
> * OR an additional socket flag
> * [1] : sk_state and sk_prot are in the same cache line.
> */
> - if (sk->sk_state == TCP_ESTABLISHED)
> - sock_rps_record_flow_hash(sk->sk_rxhash);
> + if (sk->sk_state == TCP_ESTABLISHED) {
> + /* This READ_ONCE() is paired with the WRITE_ONCE()
> + * from sock_rps_save_rxhash() and sock_rps_reset_rxhash().
> + */
> + sock_rps_record_flow_hash(READ_ONCE(sk->sk_rxhash));
> + }
> }
> #endif
> }
> @@ -1162,15 +1166,19 @@ static inline void sock_rps_save_rxhash(struct sock *sk,
> const struct sk_buff *skb)
> {
> #ifdef CONFIG_RPS
> - if (unlikely(sk->sk_rxhash != skb->hash))
> - sk->sk_rxhash = skb->hash;
> + /* The following WRITE_ONCE() is paired with the READ_ONCE()
> + * here, and another one in sock_rps_record_flow().
> + */
> + if (unlikely(READ_ONCE(sk->sk_rxhash) != skb->hash))
> + WRITE_ONCE(sk->sk_rxhash, skb->hash);
> #endif
> }
>
> static inline void sock_rps_reset_rxhash(struct sock *sk)
> {
> #ifdef CONFIG_RPS
> - sk->sk_rxhash = 0;
> + /* Paired with READ_ONCE() in sock_rps_record_flow() */
> + WRITE_ONCE(sk->sk_rxhash, 0);
> #endif
> }
>
> --
> 2.41.0.rc0.172.g3f132b7071-goog
* [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table
2023-06-06 7:41 [PATCH v2 net 0/2] rfs: annotate lockless accesses Eric Dumazet
2023-06-06 7:41 ` [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash Eric Dumazet
@ 2023-06-06 7:41 ` Eric Dumazet
2023-06-06 9:40 ` Simon Horman
2023-06-06 17:42 ` Kuniyuki Iwashima
2023-06-07 9:20 ` [PATCH v2 net 0/2] rfs: annotate lockless accesses patchwork-bot+netdevbpf
2 siblings, 2 replies; 8+ messages in thread
From: Eric Dumazet @ 2023-06-06 7:41 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Eric Dumazet
Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
This also prevents a (smart?) compiler from removing the condition in:
if (table->ents[index] != newval)
table->ents[index] = newval;
We need the condition to avoid dirtying a shared cache line.
Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/linux/netdevice.h | 7 +++++--
net/core/dev.c | 6 ++++--
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -768,8 +768,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
/* We only give a hint, preemption can change CPU under us */
val |= raw_smp_processor_id();
- if (table->ents[index] != val)
- table->ents[index] = val;
+ /* The following WRITE_ONCE() is paired with the READ_ONCE()
+ * here, and another one in get_rps_cpu().
+ */
+ if (READ_ONCE(table->ents[index]) != val)
+ WRITE_ONCE(table->ents[index], val);
}
}
diff --git a/net/core/dev.c b/net/core/dev.c
index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
u32 next_cpu;
u32 ident;
- /* First check into global flow table if there is a match */
- ident = sock_flow_table->ents[hash & sock_flow_table->mask];
+ /* First check into global flow table if there is a match.
+ * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
+ */
+ ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
if ((ident ^ hash) & ~rps_cpu_mask)
goto try_rps;
--
2.41.0.rc0.172.g3f132b7071-goog
* Re: [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table
2023-06-06 7:41 ` [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table Eric Dumazet
@ 2023-06-06 9:40 ` Simon Horman
2023-06-06 17:42 ` Kuniyuki Iwashima
1 sibling, 0 replies; 8+ messages in thread
From: Simon Horman @ 2023-06-06 9:40 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, netdev,
eric.dumazet
On Tue, Jun 06, 2023 at 07:41:15AM +0000, Eric Dumazet wrote:
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
> if (table->ents[index] != newval)
> table->ents[index] = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
* Re: [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table
2023-06-06 7:41 ` [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table Eric Dumazet
2023-06-06 9:40 ` Simon Horman
@ 2023-06-06 17:42 ` Kuniyuki Iwashima
1 sibling, 0 replies; 8+ messages in thread
From: Kuniyuki Iwashima @ 2023-06-06 17:42 UTC (permalink / raw)
To: edumazet; +Cc: davem, eric.dumazet, kuba, netdev, pabeni, simon.horman, kuniyu
From: Eric Dumazet <edumazet@google.com>
Date: Tue, 6 Jun 2023 07:41:15 +0000
> Add READ_ONCE()/WRITE_ONCE() on accesses to the sock flow table.
>
> This also prevents a (smart?) compiler from removing the condition in:
>
> if (table->ents[index] != newval)
> table->ents[index] = newval;
>
> We need the condition to avoid dirtying a shared cache line.
>
> Fixes: fec5e652e58f ("rfs: Receive Flow Steering")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
> ---
> include/linux/netdevice.h | 7 +++++--
> net/core/dev.c | 6 ++++--
> 2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 08fbd4622ccf731daaee34ad99773d6dc2e82fa6..e6f22b7403d014a2cf4d81d931109a594ce1398e 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -768,8 +768,11 @@ static inline void rps_record_sock_flow(struct rps_sock_flow_table *table,
> /* We only give a hint, preemption can change CPU under us */
> val |= raw_smp_processor_id();
>
> - if (table->ents[index] != val)
> - table->ents[index] = val;
> + /* The following WRITE_ONCE() is paired with the READ_ONCE()
> + * here, and another one in get_rps_cpu().
> + */
> + if (READ_ONCE(table->ents[index]) != val)
> + WRITE_ONCE(table->ents[index], val);
> }
> }
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index b3c13e0419356b943e90b1f46dd7e035c6ec1a9c..1495f8aff288e944c8cab21297f244a6fcde752f 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4471,8 +4471,10 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> u32 next_cpu;
> u32 ident;
>
> - /* First check into global flow table if there is a match */
> - ident = sock_flow_table->ents[hash & sock_flow_table->mask];
> + /* First check into global flow table if there is a match.
> + * This READ_ONCE() pairs with WRITE_ONCE() from rps_record_sock_flow().
> + */
> + ident = READ_ONCE(sock_flow_table->ents[hash & sock_flow_table->mask]);
> if ((ident ^ hash) & ~rps_cpu_mask)
> goto try_rps;
>
> --
> 2.41.0.rc0.172.g3f132b7071-goog
* Re: [PATCH v2 net 0/2] rfs: annotate lockless accesses
2023-06-06 7:41 [PATCH v2 net 0/2] rfs: annotate lockless accesses Eric Dumazet
2023-06-06 7:41 ` [PATCH v2 net 1/2] rfs: annotate lockless accesses to sk->sk_rxhash Eric Dumazet
2023-06-06 7:41 ` [PATCH v2 net 2/2] rfs: annotate lockless accesses to RFS sock flow table Eric Dumazet
@ 2023-06-07 9:20 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 8+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-06-07 9:20 UTC (permalink / raw)
To: Eric Dumazet; +Cc: davem, kuba, pabeni, simon.horman, netdev, eric.dumazet
Hello:
This series was applied to netdev/net.git (main)
by David S. Miller <davem@davemloft.net>:
On Tue, 6 Jun 2023 07:41:13 +0000 you wrote:
> rfs runs without locks held, so we should annotate
> reads and writes to shared variables.
>
> It should prevent compilers from forcing writes
> in the following situation:
>
> if (var != val)
> var = val;
>
> [...]
Here is the summary with links:
- [v2,net,1/2] rfs: annotate lockless accesses to sk->sk_rxhash
https://git.kernel.org/netdev/net/c/1e5c647c3f6d
- [v2,net,2/2] rfs: annotate lockless accesses to RFS sock flow table
https://git.kernel.org/netdev/net/c/5c3b74a92aa2
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html