* [PATCH v2 net-next 0/5] net: better drop accounting
@ 2025-08-26 12:50 Eric Dumazet
2025-08-26 12:50 ` [PATCH v2 net-next 1/5] net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers Eric Dumazet
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
Incrementing sk->sk_drops for every dropped packet can
cause serious cache line contention under DOS.

Add optional sk->sk_drop_counters pointer so that
protocols can opt-in to use two dedicated cache lines
to hold drop counters.

Convert UDP and RAW to use this infrastructure.
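
In short (a condensed sketch of the mechanism from patches 3/5 and
4/5, not the exact hunks): a protocol embeds a socket_drop_counters
and points sk->sk_drop_counters at it; producers then update one of
two cache-line-aligned counters chosen by NUMA node, and readers sum
both.

struct socket_drop_counters {
        atomic_t drops0 ____cacheline_aligned_in_smp;
        atomic_t drops1 ____cacheline_aligned_in_smp;
};

/* UDP opt-in, from udp_lib_init_sock() in patch 4/5: */
sk->sk_drop_counters = &udp_sk(sk)->drop_counters;

/* sk_drops_inc()/sk_drops_add() then bump drops0 or drops1 based on
 * numa_node_id(), and sk_drops_read() returns drops0 + drops1
 * (see patch 3/5).
 */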

Tested on UDP (see patch 4/5 for details). Both the consumer and
the BH handlers can process more packets per second:

Before:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams                 615091             0.0
Udp6InErrors                    3904277            0.0
Udp6RcvbufErrors                3904277            0.0

After:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams                 816281             0.0
Udp6InErrors                    7497093            0.0
Udp6RcvbufErrors                7497093            0.0

Eric Dumazet (5):
net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers
net: add sk_drops_skbadd() helper
net: add sk->sk_drop_counters
udp: add drop_counters to udp socket
inet: raw: add drop_counters to raw sockets
include/linux/ipv6.h | 2 +-
include/linux/skmsg.h | 2 +-
include/linux/udp.h | 1 +
include/net/raw.h | 1 +
include/net/sock.h | 56 ++++++++++++++++++-
include/net/tcp.h | 2 +-
include/net/udp.h | 3 +-
net/core/datagram.c | 2 +-
net/core/sock.c | 16 +++---
net/ipv4/ping.c | 2 +-
net/ipv4/raw.c | 7 ++-
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 +-
net/ipv4/udp.c | 14 ++---
net/ipv6/datagram.c | 2 +-
net/ipv6/raw.c | 9 +--
net/ipv6/tcp_ipv6.c | 4 +-
net/ipv6/udp.c | 6 +-
net/iucv/af_iucv.c | 4 +-
net/mptcp/protocol.c | 2 +-
net/netlink/af_netlink.c | 4 +-
net/packet/af_packet.c | 2 +-
net/phonet/pep.c | 6 +-
net/phonet/socket.c | 2 +-
net/sctp/diag.c | 2 +-
net/tipc/socket.c | 6 +-
.../selftests/bpf/progs/bpf_iter_udp4.c | 3 +-
.../selftests/bpf/progs/bpf_iter_udp6.c | 4 +-
28 files changed, 114 insertions(+), 56 deletions(-)
--
2.51.0.261.g7ce5a0a67e-goog
* [PATCH v2 net-next 1/5] net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
@ 2025-08-26 12:50 ` Eric Dumazet
2025-08-26 12:50 ` [PATCH v2 net-next 2/5] net: add sk_drops_skbadd() helper Eric Dumazet
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
We want to split sk->sk_drops in the future to reduce
potential contention on this field.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
---
include/net/sock.h | 17 ++++++++++++++++-
include/net/tcp.h | 2 +-
net/core/datagram.c | 2 +-
net/core/sock.c | 14 +++++++-------
net/ipv4/ping.c | 2 +-
net/ipv4/raw.c | 6 +++---
net/ipv4/udp.c | 14 +++++++-------
net/ipv6/datagram.c | 2 +-
net/ipv6/raw.c | 8 ++++----
net/ipv6/udp.c | 6 +++---
net/iucv/af_iucv.c | 4 ++--
net/netlink/af_netlink.c | 4 ++--
net/packet/af_packet.c | 2 +-
net/phonet/pep.c | 6 +++---
net/phonet/socket.c | 2 +-
net/sctp/diag.c | 2 +-
net/tipc/socket.c | 6 +++---
17 files changed, 57 insertions(+), 42 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 63a6a48afb48ad31abf05f5108886bac9831842a..34d7029eb622773e40e7c4ebd422d33b1c0a7836 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2682,11 +2682,26 @@ struct sock_skb_cb {
#define sock_skb_cb_check_size(size) \
BUILD_BUG_ON((size) > SOCK_SKB_CB_OFFSET)
+static inline void sk_drops_inc(struct sock *sk)
+{
+ atomic_inc(&sk->sk_drops);
+}
+
+static inline int sk_drops_read(const struct sock *sk)
+{
+ return atomic_read(&sk->sk_drops);
+}
+
+static inline void sk_drops_reset(struct sock *sk)
+{
+ atomic_set(&sk->sk_drops, 0);
+}
+
static inline void
sock_skb_set_dropcount(const struct sock *sk, struct sk_buff *skb)
{
SOCK_SKB_CB(skb)->dropcount = sock_flag(sk, SOCK_RXQ_OVFL) ?
- atomic_read(&sk->sk_drops) : 0;
+ sk_drops_read(sk) : 0;
}
static inline void sk_drops_add(struct sock *sk, const struct sk_buff *skb)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 2936b8175950faa777f81f3c6b7230bcc375d772..16dc9cebb9d25832eac7a6ad590a9e9e47e85142 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -2612,7 +2612,7 @@ static inline void tcp_segs_in(struct tcp_sock *tp, const struct sk_buff *skb)
*/
static inline void tcp_listendrop(const struct sock *sk)
{
- atomic_inc(&((struct sock *)sk)->sk_drops);
+ sk_drops_inc((struct sock *)sk);
__NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENDROPS);
}
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 94cc4705e91da6ba6629ae469ae6507e9c6fdae9..ba8253aa6e07c2b0db361c9dfdaf66243dc1024c 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -345,7 +345,7 @@ int __sk_queue_drop_skb(struct sock *sk, struct sk_buff_head *sk_queue,
spin_unlock_bh(&sk_queue->lock);
}
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
return err;
}
EXPORT_SYMBOL(__sk_queue_drop_skb);
diff --git a/net/core/sock.c b/net/core/sock.c
index 8002ac6293dcac694962be139eadfa6346b72d5b..75368823969a7992a55a6f40d87ffb8886de2f39 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -491,13 +491,13 @@ int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
struct sk_buff_head *list = &sk->sk_receive_queue;
if (atomic_read(&sk->sk_rmem_alloc) >= READ_ONCE(sk->sk_rcvbuf)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
trace_sock_rcvqueue_full(sk, skb);
return -ENOMEM;
}
if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
return -ENOBUFS;
}
@@ -562,7 +562,7 @@ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
skb->dev = NULL;
if (sk_rcvqueues_full(sk, READ_ONCE(sk->sk_rcvbuf))) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
reason = SKB_DROP_REASON_SOCKET_RCVBUFF;
goto discard_and_relse;
}
@@ -585,7 +585,7 @@ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
reason = SKB_DROP_REASON_PFMEMALLOC;
if (err == -ENOBUFS)
reason = SKB_DROP_REASON_SOCKET_BACKLOG;
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
goto discard_and_relse;
}
@@ -2505,7 +2505,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
newsk->sk_wmem_queued = 0;
newsk->sk_forward_alloc = 0;
newsk->sk_reserved_mem = 0;
- atomic_set(&newsk->sk_drops, 0);
+ sk_drops_reset(newsk);
newsk->sk_send_head = NULL;
newsk->sk_userlocks = sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
atomic_set(&newsk->sk_zckey, 0);
@@ -3713,7 +3713,7 @@ void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
*/
smp_wmb();
refcount_set(&sk->sk_refcnt, 1);
- atomic_set(&sk->sk_drops, 0);
+ sk_drops_reset(sk);
}
EXPORT_SYMBOL(sock_init_data_uid);
@@ -3973,7 +3973,7 @@ void sk_get_meminfo(const struct sock *sk, u32 *mem)
mem[SK_MEMINFO_WMEM_QUEUED] = READ_ONCE(sk->sk_wmem_queued);
mem[SK_MEMINFO_OPTMEM] = atomic_read(&sk->sk_omem_alloc);
mem[SK_MEMINFO_BACKLOG] = READ_ONCE(sk->sk_backlog.len);
- mem[SK_MEMINFO_DROPS] = atomic_read(&sk->sk_drops);
+ mem[SK_MEMINFO_DROPS] = sk_drops_read(sk);
}
#ifdef CONFIG_PROC_FS
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
index 031df4c19fcc5ca18137695c78358c3ad96a2c4a..f119da68fc301be00719213ad33615b6754e6272 100644
--- a/net/ipv4/ping.c
+++ b/net/ipv4/ping.c
@@ -1119,7 +1119,7 @@ static void ping_v4_format_sock(struct sock *sp, struct seq_file *f,
from_kuid_munged(seq_user_ns(f), sk_uid(sp)),
0, sock_i_ino(sp),
refcount_read(&sp->sk_refcnt), sp,
- atomic_read(&sp->sk_drops));
+ sk_drops_read(sp));
}
static int ping_v4_seq_show(struct seq_file *seq, void *v)
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index 1d2c89d63cc71f39d742c8156879847fc4e53c71..0f9f02f6146eef6df3f5bbb4f564e16fbabd1ba2 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -178,7 +178,7 @@ static int raw_v4_input(struct net *net, struct sk_buff *skb,
if (atomic_read(&sk->sk_rmem_alloc) >=
READ_ONCE(sk->sk_rcvbuf)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
continue;
}
@@ -311,7 +311,7 @@ static int raw_rcv_skb(struct sock *sk, struct sk_buff *skb)
int raw_rcv(struct sock *sk, struct sk_buff *skb)
{
if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_XFRM_POLICY);
return NET_RX_DROP;
}
@@ -1045,7 +1045,7 @@ static void raw_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
0, 0L, 0,
from_kuid_munged(seq_user_ns(seq), sk_uid(sp)),
0, sock_i_ino(sp),
- refcount_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
+ refcount_read(&sp->sk_refcnt), sp, sk_drops_read(sp));
}
static int raw_seq_show(struct seq_file *seq, void *v)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index cc3ce0f762ec211a963464c2dd7ac329a6be1ffd..732bdad43626948168bdb9e40c151787f047bbfd 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1787,7 +1787,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
drop:
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
busylock_release(busy);
return err;
}
@@ -1852,7 +1852,7 @@ static struct sk_buff *__first_packet_length(struct sock *sk,
IS_UDPLITE(sk));
__UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS,
IS_UDPLITE(sk));
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
__skb_unlink(skb, rcvq);
*total += skb->truesize;
kfree_skb_reason(skb, SKB_DROP_REASON_UDP_CSUM);
@@ -2008,7 +2008,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
__UDP_INC_STATS(net, UDP_MIB_CSUMERRORS, is_udplite);
__UDP_INC_STATS(net, UDP_MIB_INERRORS, is_udplite);
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
kfree_skb_reason(skb, SKB_DROP_REASON_UDP_CSUM);
goto try_again;
}
@@ -2078,7 +2078,7 @@ int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags,
if (unlikely(err)) {
if (!peeking) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
UDP_INC_STATS(sock_net(sk),
UDP_MIB_INERRORS, is_udplite);
}
@@ -2449,7 +2449,7 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
__UDP_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
drop:
__UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, drop_reason);
return -1;
}
@@ -2534,7 +2534,7 @@ static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
nskb = skb_clone(skb, GFP_ATOMIC);
if (unlikely(!nskb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
__UDP_INC_STATS(net, UDP_MIB_RCVBUFERRORS,
IS_UDPLITE(sk));
__UDP_INC_STATS(net, UDP_MIB_INERRORS,
@@ -3386,7 +3386,7 @@ static void udp4_format_sock(struct sock *sp, struct seq_file *f,
from_kuid_munged(seq_user_ns(f), sk_uid(sp)),
0, sock_i_ino(sp),
refcount_read(&sp->sk_refcnt), sp,
- atomic_read(&sp->sk_drops));
+ sk_drops_read(sp));
}
int udp4_seq_show(struct seq_file *seq, void *v)
diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
index 972bf0426d599af43bfd2d0e4236592f34ec7866..33ebe93d80e3cb6d897a3c7f714f94c395856023 100644
--- a/net/ipv6/datagram.c
+++ b/net/ipv6/datagram.c
@@ -1068,5 +1068,5 @@ void __ip6_dgram_sock_seq_show(struct seq_file *seq, struct sock *sp,
0,
sock_i_ino(sp),
refcount_read(&sp->sk_refcnt), sp,
- atomic_read(&sp->sk_drops));
+ sk_drops_read(sp));
}
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index 4c3f8245c40f155f3efde0d7b8af50e0bef431c7..4026192143ec9f1b071f43874185bc367c950c67 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -163,7 +163,7 @@ static bool ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
if (atomic_read(&sk->sk_rmem_alloc) >=
READ_ONCE(sk->sk_rcvbuf)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
continue;
}
@@ -361,7 +361,7 @@ static inline int rawv6_rcv_skb(struct sock *sk, struct sk_buff *skb)
if ((raw6_sk(sk)->checksum || rcu_access_pointer(sk->sk_filter)) &&
skb_checksum_complete(skb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_SKB_CSUM);
return NET_RX_DROP;
}
@@ -389,7 +389,7 @@ int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
struct raw6_sock *rp = raw6_sk(sk);
if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_XFRM_POLICY);
return NET_RX_DROP;
}
@@ -414,7 +414,7 @@ int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
if (inet_test_bit(HDRINCL, sk)) {
if (skb_checksum_complete(skb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, SKB_DROP_REASON_SKB_CSUM);
return NET_RX_DROP;
}
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 6a68f77da44b55baed42b44c936902f865754140..a35ee6d693a8080b9009f61d23fafd2465b8c625 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -524,7 +524,7 @@ int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
}
if (unlikely(err)) {
if (!peeking) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
SNMP_INC_STATS(mib, UDP_MIB_INERRORS);
}
kfree_skb(skb);
@@ -908,7 +908,7 @@ static int udpv6_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
__UDP6_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
drop:
__UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
sk_skb_reason_drop(sk, skb, drop_reason);
return -1;
}
@@ -1013,7 +1013,7 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
}
nskb = skb_clone(skb, GFP_ATOMIC);
if (unlikely(!nskb)) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
__UDP6_INC_STATS(net, UDP_MIB_RCVBUFERRORS,
IS_UDPLITE(sk));
__UDP6_INC_STATS(net, UDP_MIB_INERRORS,
diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
index cc2b3c44bc05a629d455e99369491b28b4b93884..6c717a7ef292831b49c1dca22ecc2bb7a7179b0f 100644
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -1187,7 +1187,7 @@ static void iucv_process_message(struct sock *sk, struct sk_buff *skb,
IUCV_SKB_CB(skb)->offset = 0;
if (sk_filter(sk, skb)) {
- atomic_inc(&sk->sk_drops); /* skb rejected by filter */
+ sk_drops_inc(sk); /* skb rejected by filter */
kfree_skb(skb);
return;
}
@@ -2011,7 +2011,7 @@ static int afiucv_hs_callback_rx(struct sock *sk, struct sk_buff *skb)
skb_reset_network_header(skb);
IUCV_SKB_CB(skb)->offset = 0;
if (sk_filter(sk, skb)) {
- atomic_inc(&sk->sk_drops); /* skb rejected by filter */
+ sk_drops_inc(sk); /* skb rejected by filter */
kfree_skb(skb);
return NET_RX_SUCCESS;
}
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index e2f7080dd5d7cd52722248b719c294cdccf70328..2b46c0cd752a313ad95cf17c46237883d6b85293 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -356,7 +356,7 @@ static void netlink_overrun(struct sock *sk)
sk_error_report(sk);
}
}
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
}
static void netlink_rcv_wake(struct sock *sk)
@@ -2711,7 +2711,7 @@ static int netlink_native_seq_show(struct seq_file *seq, void *v)
sk_wmem_alloc_get(s),
READ_ONCE(nlk->cb_running),
refcount_read(&s->sk_refcnt),
- atomic_read(&s->sk_drops),
+ sk_drops_read(s),
sock_i_ino(s)
);
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index a7017d7f09272058106181e95367080dc821da69..9d42c4bd6e390c7212fc0a8dde5cc14ba7a00d53 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2265,7 +2265,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev,
drop_n_acct:
atomic_inc(&po->tp_drops);
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
drop_reason = SKB_DROP_REASON_PACKET_SOCK_ERROR;
drop_n_restore:
diff --git a/net/phonet/pep.c b/net/phonet/pep.c
index 62527e1ebb883d2854bcdc5256cd48e85e5c5dbc..4db564d9d522b639e9527d48eaa42a1cd9fbfba7 100644
--- a/net/phonet/pep.c
+++ b/net/phonet/pep.c
@@ -376,7 +376,7 @@ static int pipe_do_rcv(struct sock *sk, struct sk_buff *skb)
case PNS_PEP_CTRL_REQ:
if (skb_queue_len(&pn->ctrlreq_queue) >= PNPIPE_CTRLREQ_MAX) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
break;
}
__skb_pull(skb, 4);
@@ -397,7 +397,7 @@ static int pipe_do_rcv(struct sock *sk, struct sk_buff *skb)
}
if (pn->rx_credits == 0) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
err = -ENOBUFS;
break;
}
@@ -567,7 +567,7 @@ static int pipe_handler_do_rcv(struct sock *sk, struct sk_buff *skb)
}
if (pn->rx_credits == 0) {
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
err = NET_RX_DROP;
break;
}
diff --git a/net/phonet/socket.c b/net/phonet/socket.c
index 2b61a40b568e91e340130a9b589e2b7a9346643f..db2d552e9b32e384c332774b99199108abd464f2 100644
--- a/net/phonet/socket.c
+++ b/net/phonet/socket.c
@@ -587,7 +587,7 @@ static int pn_sock_seq_show(struct seq_file *seq, void *v)
from_kuid_munged(seq_user_ns(seq), sk_uid(sk)),
sock_i_ino(sk),
refcount_read(&sk->sk_refcnt), sk,
- atomic_read(&sk->sk_drops));
+ sk_drops_read(sk));
}
seq_pad(seq, '\n');
return 0;
diff --git a/net/sctp/diag.c b/net/sctp/diag.c
index 23359e522273f0377080007c75eb2c276945f781..996c2018f0e611bd0da2df2f73e90e2f94c463d9 100644
--- a/net/sctp/diag.c
+++ b/net/sctp/diag.c
@@ -173,7 +173,7 @@ static int inet_sctp_diag_fill(struct sock *sk, struct sctp_association *asoc,
mem[SK_MEMINFO_WMEM_QUEUED] = sk->sk_wmem_queued;
mem[SK_MEMINFO_OPTMEM] = atomic_read(&sk->sk_omem_alloc);
mem[SK_MEMINFO_BACKLOG] = READ_ONCE(sk->sk_backlog.len);
- mem[SK_MEMINFO_DROPS] = atomic_read(&sk->sk_drops);
+ mem[SK_MEMINFO_DROPS] = sk_drops_read(sk);
if (nla_put(skb, INET_DIAG_SKMEMINFO, sizeof(mem), &mem) < 0)
goto errout;
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index e028bf6584992c5ab7307d81082fbe4582e78068..1574a83384f88533cfab330c559512d5878bf0aa 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -2366,7 +2366,7 @@ static void tipc_sk_filter_rcv(struct sock *sk, struct sk_buff *skb,
else if (sk_rmem_alloc_get(sk) + skb->truesize >= limit) {
trace_tipc_sk_dump(sk, skb, TIPC_DUMP_ALL,
"err_overload2!");
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
err = TIPC_ERR_OVERLOAD;
}
@@ -2458,7 +2458,7 @@ static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk,
trace_tipc_sk_dump(sk, skb, TIPC_DUMP_ALL, "err_overload!");
/* Overload => reject message back to sender */
onode = tipc_own_addr(sock_net(sk));
- atomic_inc(&sk->sk_drops);
+ sk_drops_inc(sk);
if (tipc_msg_reverse(onode, &skb, TIPC_ERR_OVERLOAD)) {
trace_tipc_sk_rej_msg(sk, skb, TIPC_DUMP_ALL,
"@sk_enqueue!");
@@ -3657,7 +3657,7 @@ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
nla_put_u32(skb, TIPC_NLA_SOCK_STAT_SENDQ,
skb_queue_len(&sk->sk_write_queue)) ||
nla_put_u32(skb, TIPC_NLA_SOCK_STAT_DROP,
- atomic_read(&sk->sk_drops)))
+ sk_drops_read(sk)))
goto stat_msg_cancel;
if (tsk->cong_link_cnt &&
--
2.51.0.261.g7ce5a0a67e-goog
* [PATCH v2 net-next 2/5] net: add sk_drops_skbadd() helper
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
2025-08-26 12:50 ` [PATCH v2 net-next 1/5] net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers Eric Dumazet
@ 2025-08-26 12:50 ` Eric Dumazet
2025-08-27 5:28 ` Kuniyuki Iwashima
2025-08-26 12:50 ` [PATCH v2 net-next 3/5] net: add sk->sk_drop_counters Eric Dumazet
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
Existing sk_drops_add() helper is renamed to sk_drops_skbadd().

Add sk_drops_add() and convert sk_drops_inc() to use it.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/linux/skmsg.h | 2 +-
include/net/sock.h | 11 ++++++++---
include/net/udp.h | 2 +-
net/ipv4/tcp_input.c | 2 +-
net/ipv4/tcp_ipv4.c | 4 ++--
net/ipv6/tcp_ipv6.c | 4 ++--
net/mptcp/protocol.c | 2 +-
7 files changed, 16 insertions(+), 11 deletions(-)
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 0b9095a281b8988dfd06c69254d2bcbedcfaf6b4..49847888c287ab980531c4b78f5ab4aac5018240 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -315,7 +315,7 @@ static inline bool sk_psock_test_state(const struct sk_psock *psock,
static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
{
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
kfree_skb(skb);
}
diff --git a/include/net/sock.h b/include/net/sock.h
index 34d7029eb622773e40e7c4ebd422d33b1c0a7836..9edb42ff06224cb8a1dd4f84af25bc22d1803ca9 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2682,9 +2682,14 @@ struct sock_skb_cb {
#define sock_skb_cb_check_size(size) \
BUILD_BUG_ON((size) > SOCK_SKB_CB_OFFSET)
+static inline void sk_drops_add(struct sock *sk, int segs)
+{
+ atomic_add(segs, &sk->sk_drops);
+}
+
static inline void sk_drops_inc(struct sock *sk)
{
- atomic_inc(&sk->sk_drops);
+ sk_drops_add(sk, 1);
}
static inline int sk_drops_read(const struct sock *sk)
@@ -2704,11 +2709,11 @@ sock_skb_set_dropcount(const struct sock *sk, struct sk_buff *skb)
sk_drops_read(sk) : 0;
}
-static inline void sk_drops_add(struct sock *sk, const struct sk_buff *skb)
+static inline void sk_drops_skbadd(struct sock *sk, const struct sk_buff *skb)
{
int segs = max_t(u16, 1, skb_shinfo(skb)->gso_segs);
- atomic_add(segs, &sk->sk_drops);
+ sk_drops_add(sk, segs);
}
static inline ktime_t sock_read_timestamp(struct sock *sk)
diff --git a/include/net/udp.h b/include/net/udp.h
index e2af3bda90c9327105bb329927fd3521e51926d8..7b26d4c50f33b94507933c407531c14b8edd306a 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -627,7 +627,7 @@ static inline struct sk_buff *udp_rcv_segment(struct sock *sk,
return segs;
drop:
- atomic_add(drop_count, &sk->sk_drops);
+ sk_drops_add(sk, drop_count);
SNMP_ADD_STATS(__UDPX_MIB(sk, ipv4), UDP_MIB_INERRORS, drop_count);
kfree_skb(skb);
return NULL;
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index a52a747d8a55e6a405d2fb1608e979abceb51c07..f1be65af1a777a803ae402c933e539cdabff7202 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4830,7 +4830,7 @@ static bool tcp_ooo_try_coalesce(struct sock *sk,
noinline_for_tracing static void
tcp_drop_reason(struct sock *sk, struct sk_buff *skb, enum skb_drop_reason reason)
{
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
sk_skb_reason_drop(sk, skb, reason);
}
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index a0c93b24c6e0ca2eb477686e477d164b0b132e7a..7c1d612afca18b424b32ee5e97b99a68062d8436 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -2254,7 +2254,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
&iph->saddr, &iph->daddr,
AF_INET, dif, sdif);
if (unlikely(drop_reason)) {
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
reqsk_put(req);
goto discard_it;
}
@@ -2399,7 +2399,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
return 0;
discard_and_relse:
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
if (refcounted)
sock_put(sk);
goto discard_it;
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 8b2e7b7afbd847b5d94b30ab27779e4dc705710d..b4e56b8772730579cb85f10b147a15acce03f8e4 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1809,7 +1809,7 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
&hdr->saddr, &hdr->daddr,
AF_INET6, dif, sdif);
if (drop_reason) {
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
reqsk_put(req);
goto discard_it;
}
@@ -1948,7 +1948,7 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_rcv(struct sk_buff *skb)
return 0;
discard_and_relse:
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
if (refcounted)
sock_put(sk);
goto discard_it;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f2e728239480444ffdb297efc35303848d4c4a31..ad41c48126e44fda646f1ec1c81957db1407a6cc 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -137,7 +137,7 @@ struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk)
static void mptcp_drop(struct sock *sk, struct sk_buff *skb)
{
- sk_drops_add(sk, skb);
+ sk_drops_skbadd(sk, skb);
__kfree_skb(skb);
}
--
2.51.0.261.g7ce5a0a67e-goog
* [PATCH v2 net-next 3/5] net: add sk->sk_drop_counters
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
2025-08-26 12:50 ` [PATCH v2 net-next 1/5] net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers Eric Dumazet
2025-08-26 12:50 ` [PATCH v2 net-next 2/5] net: add sk_drops_skbadd() helper Eric Dumazet
@ 2025-08-26 12:50 ` Eric Dumazet
2025-08-27 5:30 ` Kuniyuki Iwashima
2025-08-26 12:50 ` [PATCH v2 net-next 4/5] udp: add drop_counters to udp socket Eric Dumazet
` (2 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
Some sockets suffer from heavy false sharing on sk->sk_drops,
and fields in the same cache line.

Add sk->sk_drop_counters to:

- move the drop counter(s) to dedicated cache lines.
- Add basic NUMA awareness to these drop counter(s).

Following patches will use this infrastructure for UDP and RAW sockets.

sk_clone_lock() is not yet ready, it would need to properly
set newsk->sk_drop_counters if we plan to use this for TCP sockets.

v2: used Paolo suggestion from https://lore.kernel.org/netdev/8f09830a-d83d-43c9-b36b-88ba0a23e9b2@redhat.com/
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/net/sock.h | 32 +++++++++++++++++++++++++++++++-
net/core/sock.c | 2 ++
2 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 9edb42ff06224cb8a1dd4f84af25bc22d1803ca9..73cd3316e288bde912dd96637e52d226575c2ffd 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -102,6 +102,11 @@ struct net;
typedef __u32 __bitwise __portpair;
typedef __u64 __bitwise __addrpair;
+struct socket_drop_counters {
+ atomic_t drops0 ____cacheline_aligned_in_smp;
+ atomic_t drops1 ____cacheline_aligned_in_smp;
+};
+
/**
* struct sock_common - minimal network layer representation of sockets
* @skc_daddr: Foreign IPv4 addr
@@ -282,6 +287,7 @@ struct sk_filter;
* @sk_err_soft: errors that don't cause failure but are the cause of a
* persistent failure not just 'timed out'
* @sk_drops: raw/udp drops counter
+ * @sk_drop_counters: optional pointer to socket_drop_counters
* @sk_ack_backlog: current listen backlog
* @sk_max_ack_backlog: listen backlog set in listen()
* @sk_uid: user id of owner
@@ -449,6 +455,7 @@ struct sock {
#ifdef CONFIG_XFRM
struct xfrm_policy __rcu *sk_policy[2];
#endif
+ struct socket_drop_counters *sk_drop_counters;
__cacheline_group_end(sock_read_rxtx);
__cacheline_group_begin(sock_write_rxtx);
@@ -2684,7 +2691,18 @@ struct sock_skb_cb {
static inline void sk_drops_add(struct sock *sk, int segs)
{
- atomic_add(segs, &sk->sk_drops);
+ struct socket_drop_counters *sdc = sk->sk_drop_counters;
+
+ if (sdc) {
+ int n = numa_node_id() % 2;
+
+ if (n)
+ atomic_add(segs, &sdc->drops1);
+ else
+ atomic_add(segs, &sdc->drops0);
+ } else {
+ atomic_add(segs, &sk->sk_drops);
+ }
}
static inline void sk_drops_inc(struct sock *sk)
@@ -2694,11 +2712,23 @@ static inline void sk_drops_inc(struct sock *sk)
static inline int sk_drops_read(const struct sock *sk)
{
+ const struct socket_drop_counters *sdc = sk->sk_drop_counters;
+
+ if (sdc) {
+ DEBUG_NET_WARN_ON_ONCE(atomic_read(&sk->sk_drops));
+ return atomic_read(&sdc->drops0) + atomic_read(&sdc->drops1);
+ }
return atomic_read(&sk->sk_drops);
}
static inline void sk_drops_reset(struct sock *sk)
{
+ struct socket_drop_counters *sdc = sk->sk_drop_counters;
+
+ if (sdc) {
+ atomic_set(&sdc->drops0, 0);
+ atomic_set(&sdc->drops1, 0);
+ }
atomic_set(&sk->sk_drops, 0);
}
diff --git a/net/core/sock.c b/net/core/sock.c
index 75368823969a7992a55a6f40d87ffb8886de2f39..e66ad1ec3a2d969b71835a492806563519459749 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2505,6 +2505,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
newsk->sk_wmem_queued = 0;
newsk->sk_forward_alloc = 0;
newsk->sk_reserved_mem = 0;
+ DEBUG_NET_WARN_ON_ONCE(newsk->sk_drop_counters);
sk_drops_reset(newsk);
newsk->sk_send_head = NULL;
newsk->sk_userlocks = sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
@@ -4457,6 +4458,7 @@ static int __init sock_struct_check(void)
#ifdef CONFIG_MEMCG
CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_rxtx, sk_memcg);
#endif
+ CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_read_rxtx, sk_drop_counters);
CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_lock);
CACHELINE_ASSERT_GROUP_MEMBER(struct sock, sock_write_rxtx, sk_reserved_mem);
--
2.51.0.261.g7ce5a0a67e-goog
* [PATCH v2 net-next 4/5] udp: add drop_counters to udp socket
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
` (2 preceding siblings ...)
2025-08-26 12:50 ` [PATCH v2 net-next 3/5] net: add sk->sk_drop_counters Eric Dumazet
@ 2025-08-26 12:50 ` Eric Dumazet
2025-08-27 5:32 ` Kuniyuki Iwashima
2025-08-26 12:50 ` [PATCH v2 net-next 5/5] inet: raw: add drop_counters to raw sockets Eric Dumazet
2025-08-28 12:50 ` [PATCH v2 net-next 0/5] net: better drop accounting patchwork-bot+netdevbpf
5 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
When a packet flood hits one or more UDP sockets, many cpus
have to update sk->sk_drops.

This slows down other cpus, because currently
sk_drops is in sock_write_rx group.

Add a socket_drop_counters structure to udp sockets.

Using dedicated cache lines to hold drop counters
makes sure that consumers no longer suffer from
false sharing if/when producers only change sk->sk_drops.

This adds 128 bytes per UDP socket.

Tested with the following stress test, sending about 11 Mpps
to a dual socket AMD EPYC 7B13 64-Core.

super_netperf 20 -t UDP_STREAM -H DUT -l10 -- -n -P,1000 -m 120
Note: due to socket lookup, only one UDP socket is receiving
packets on DUT.

Then measure receiver (DUT) behavior. We can see both
consumer and BH handlers can process more packets per second.

Before:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams                 615091             0.0
Udp6InErrors                    3904277            0.0
Udp6RcvbufErrors                3904277            0.0

After:

nstat -n ; sleep 1 ; nstat | grep Udp
Udp6InDatagrams                 816281             0.0
Udp6InErrors                    7497093            0.0
Udp6RcvbufErrors                7497093            0.0
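
Side note on the bpf_iter selftest hunks below: once a UDP socket has
opted in, sk->sk_drops stays at zero and the aggregate must be read as
drops0 + drops1, which is what sk_drops_read() from patch 3/5 does
(debug-only warning elided in this sketch):

static inline int sk_drops_read(const struct sock *sk)
{
        const struct socket_drop_counters *sdc = sk->sk_drop_counters;

        if (sdc)
                return atomic_read(&sdc->drops0) + atomic_read(&sdc->drops1);
        return atomic_read(&sk->sk_drops);
}

This is why the iterator programs now sum the two counters instead of
reading inet->sk.sk_drops.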
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/linux/udp.h | 1 +
include/net/udp.h | 1 +
tools/testing/selftests/bpf/progs/bpf_iter_udp4.c | 3 ++-
tools/testing/selftests/bpf/progs/bpf_iter_udp6.c | 4 ++--
4 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/linux/udp.h b/include/linux/udp.h
index 4e1a672af4c57f01d10dde906b2114327387ca73..981506be1e15ad3aa831c1d4884372b2a477f988 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -108,6 +108,7 @@ struct udp_sock {
* the last UDP socket cacheline.
*/
struct hlist_node tunnel_list;
+ struct socket_drop_counters drop_counters;
};
#define udp_test_bit(nr, sk) \
diff --git a/include/net/udp.h b/include/net/udp.h
index 7b26d4c50f33b94507933c407531c14b8edd306a..93b159f30e884ce7d30e2d2240b846441c5e135b 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -288,6 +288,7 @@ static inline void udp_lib_init_sock(struct sock *sk)
{
struct udp_sock *up = udp_sk(sk);
+ sk->sk_drop_counters = &up->drop_counters;
skb_queue_head_init(&up->reader_queue);
INIT_HLIST_NODE(&up->tunnel_list);
up->forward_threshold = sk->sk_rcvbuf >> 2;
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c b/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
index ffbd4b116d17ffbb9f14440c788e50490fb0f4e0..23b2aa2604de2fd0b8075c9b446230125961ae8c 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_udp4.c
@@ -64,7 +64,8 @@ int dump_udp4(struct bpf_iter__udp *ctx)
0, 0L, 0, ctx->uid, 0,
sock_i_ino(&inet->sk),
inet->sk.sk_refcnt.refs.counter, udp_sk,
- inet->sk.sk_drops.counter);
+ udp_sk->drop_counters.drops0.counter +
+ udp_sk->drop_counters.drops1.counter);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c b/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
index 47ff7754f4fda4c9db92fbf1dc2e6a68f044174e..c48b05aa2a4b2a008b0e2dcc1d97dc84d67aff68 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_udp6.c
@@ -72,7 +72,7 @@ int dump_udp6(struct bpf_iter__udp *ctx)
0, 0L, 0, ctx->uid, 0,
sock_i_ino(&inet->sk),
inet->sk.sk_refcnt.refs.counter, udp_sk,
- inet->sk.sk_drops.counter);
-
+ udp_sk->drop_counters.drops0.counter +
+ udp_sk->drop_counters.drops1.counter);
return 0;
}
--
2.51.0.261.g7ce5a0a67e-goog
* [PATCH v2 net-next 5/5] inet: raw: add drop_counters to raw sockets
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
` (3 preceding siblings ...)
2025-08-26 12:50 ` [PATCH v2 net-next 4/5] udp: add drop_counters to udp socket Eric Dumazet
@ 2025-08-26 12:50 ` Eric Dumazet
2025-08-27 5:34 ` Kuniyuki Iwashima
2025-08-28 12:50 ` [PATCH v2 net-next 0/5] net: better drop accounting patchwork-bot+netdevbpf
5 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2025-08-26 12:50 UTC (permalink / raw)
To: David S . Miller, Jakub Kicinski, Paolo Abeni
Cc: Simon Horman, netdev, eric.dumazet, Willem de Bruijn,
Kuniyuki Iwashima, Eric Dumazet
When a packet flood hits one or more RAW sockets, many cpus
have to update sk->sk_drops.

This slows down other cpus, because currently
sk_drops is in sock_write_rx group.

Add a socket_drop_counters structure to raw sockets.

Using dedicated cache lines to hold drop counters
makes sure that consumers no longer suffer from
false sharing if/when producers only change sk->sk_drops.

This adds 128 bytes per RAW socket.
Signed-off-by: Eric Dumazet <edumazet@google.com>
---
include/linux/ipv6.h | 2 +-
include/net/raw.h | 1 +
net/ipv4/raw.c | 1 +
net/ipv6/raw.c | 1 +
4 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
index bc6ec295917321b38489efb4a82897ad02ee9b52..261d02efb615cfb7fa5717a88c1b2612ef0cbd82 100644
--- a/include/linux/ipv6.h
+++ b/include/linux/ipv6.h
@@ -295,7 +295,7 @@ struct raw6_sock {
__u32 offset; /* checksum offset */
struct icmp6_filter filter;
__u32 ip6mr_table;
-
+ struct socket_drop_counters drop_counters;
struct ipv6_pinfo inet6;
};
diff --git a/include/net/raw.h b/include/net/raw.h
index 32a61481a253b2cf991fc4a3360e56604ef8490d..d5270913906077f88cbd843ed1edde345b4d42d7 100644
--- a/include/net/raw.h
+++ b/include/net/raw.h
@@ -81,6 +81,7 @@ struct raw_sock {
struct inet_sock inet;
struct icmp_filter filter;
u32 ipmr_table;
+ struct socket_drop_counters drop_counters;
};
#define raw_sk(ptr) container_of_const(ptr, struct raw_sock, inet.sk)
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index 0f9f02f6146eef6df3f5bbb4f564e16fbabd1ba2..d54ebb7df966d561c8f29b390212a4e6140dcada 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -793,6 +793,7 @@ static int raw_sk_init(struct sock *sk)
{
struct raw_sock *rp = raw_sk(sk);
+ sk->sk_drop_counters = &rp->drop_counters;
if (inet_sk(sk)->inet_num == IPPROTO_ICMP)
memset(&rp->filter, 0, sizeof(rp->filter));
return 0;
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index 4026192143ec9f1b071f43874185bc367c950c67..4ae07a67b4d4f1be6730c252d246e79ff9c73d4c 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -1175,6 +1175,7 @@ static int rawv6_init_sk(struct sock *sk)
{
struct raw6_sock *rp = raw6_sk(sk);
+ sk->sk_drop_counters = &rp->drop_counters;
switch (inet_sk(sk)->inet_num) {
case IPPROTO_ICMPV6:
rp->checksum = 1;
--
2.51.0.261.g7ce5a0a67e-goog
* Re: [PATCH v2 net-next 2/5] net: add sk_drops_skbadd() helper
2025-08-26 12:50 ` [PATCH v2 net-next 2/5] net: add sk_drops_skbadd() helper Eric Dumazet
@ 2025-08-27 5:28 ` Kuniyuki Iwashima
0 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2025-08-27 5:28 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
netdev, eric.dumazet, Willem de Bruijn
On Tue, Aug 26, 2025 at 5:50 AM Eric Dumazet <edumazet@google.com> wrote:
>
> Existing sk_drops_add() helper is renamed to sk_drops_skbadd().
>
> Add sk_drops_add() and convert sk_drops_inc() to use it.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
* Re: [PATCH v2 net-next 3/5] net: add sk->sk_drop_counters
2025-08-26 12:50 ` [PATCH v2 net-next 3/5] net: add sk->sk_drop_counters Eric Dumazet
@ 2025-08-27 5:30 ` Kuniyuki Iwashima
0 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2025-08-27 5:30 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
netdev, eric.dumazet, Willem de Bruijn
On Tue, Aug 26, 2025 at 5:50 AM Eric Dumazet <edumazet@google.com> wrote:
>
> Some sockets suffer from heavy false sharing on sk->sk_drops,
> and fields in the same cache line.
>
> Add sk->sk_drop_counters to:
>
> - move the drop counter(s) to dedicated cache lines.
> - Add basic NUMA awareness to these drop counter(s).
>
> Following patches will use this infrastructure for UDP and RAW sockets.
>
> sk_clone_lock() is not yet ready, it would need to properly
> set newsk->sk_drop_counters if we plan to use this for TCP sockets.
>
> v2: used Paolo suggestion from https://lore.kernel.org/netdev/8f09830a-d83d-43c9-b36b-88ba0a23e9b2@redhat.com/
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
* Re: [PATCH v2 net-next 4/5] udp: add drop_counters to udp socket
2025-08-26 12:50 ` [PATCH v2 net-next 4/5] udp: add drop_counters to udp socket Eric Dumazet
@ 2025-08-27 5:32 ` Kuniyuki Iwashima
0 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2025-08-27 5:32 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
netdev, eric.dumazet, Willem de Bruijn
On Tue, Aug 26, 2025 at 5:50 AM Eric Dumazet <edumazet@google.com> wrote:
>
> When a packet flood hits one or more UDP sockets, many cpus
> have to update sk->sk_drops.
>
> This slows down other cpus, because currently
> sk_drops is in sock_write_rx group.
>
> Add a socket_drop_counters structure to udp sockets.
>
> Using dedicated cache lines to hold drop counters
> makes sure that consumers no longer suffer from
> false sharing if/when producers only change sk->sk_drops.
>
> This adds 128 bytes per UDP socket.
>
> Tested with the following stress test, sending about 11 Mpps
> to a dual socket AMD EPYC 7B13 64-Core.
>
> super_netperf 20 -t UDP_STREAM -H DUT -l10 -- -n -P,1000 -m 120
> Note: due to socket lookup, only one UDP socket is receiving
> packets on DUT.
>
> Then measure receiver (DUT) behavior. We can see both
> consumer and BH handlers can process more packets per second.
>
> Before:
>
> nstat -n ; sleep 1 ; nstat | grep Udp
> Udp6InDatagrams 615091 0.0
> Udp6InErrors 3904277 0.0
> Udp6RcvbufErrors 3904277 0.0
>
> After:
>
> nstat -n ; sleep 1 ; nstat | grep Udp
> Udp6InDatagrams 816281 0.0
> Udp6InErrors 7497093 0.0
> Udp6RcvbufErrors 7497093 0.0
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
* Re: [PATCH v2 net-next 5/5] inet: raw: add drop_counters to raw sockets
2025-08-26 12:50 ` [PATCH v2 net-next 5/5] inet: raw: add drop_counters to raw sockets Eric Dumazet
@ 2025-08-27 5:34 ` Kuniyuki Iwashima
0 siblings, 0 replies; 11+ messages in thread
From: Kuniyuki Iwashima @ 2025-08-27 5:34 UTC (permalink / raw)
To: Eric Dumazet
Cc: David S . Miller, Jakub Kicinski, Paolo Abeni, Simon Horman,
netdev, eric.dumazet, Willem de Bruijn
On Tue, Aug 26, 2025 at 5:50 AM Eric Dumazet <edumazet@google.com> wrote:
>
> When a packet flood hits one or more RAW sockets, many cpus
> have to update sk->sk_drops.
>
> This slows down other cpus, because currently
> sk_drops is in sock_write_rx group.
>
> Add a socket_drop_counters structure to raw sockets.
>
> Using dedicated cache lines to hold drop counters
> makes sure that consumers no longer suffer from
> false sharing if/when producers only change sk->sk_drops.
>
> This adds 128 bytes per RAW socket.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Thank you, Eric!
* Re: [PATCH v2 net-next 0/5] net: better drop accounting
2025-08-26 12:50 [PATCH v2 net-next 0/5] net: better drop accounting Eric Dumazet
` (4 preceding siblings ...)
2025-08-26 12:50 ` [PATCH v2 net-next 5/5] inet: raw: add drop_counters to raw sockets Eric Dumazet
@ 2025-08-28 12:50 ` patchwork-bot+netdevbpf
5 siblings, 0 replies; 11+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-08-28 12:50 UTC (permalink / raw)
To: Eric Dumazet
Cc: davem, kuba, pabeni, horms, netdev, eric.dumazet, willemb, kuniyu
Hello:
This series was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:
On Tue, 26 Aug 2025 12:50:26 +0000 you wrote:
> Incrementing sk->sk_drops for every dropped packet can
> cause serious cache line contention under DOS.
>
> Add optional sk->sk_drop_counters pointer so that
> protocols can opt-in to use two dedicated cache lines
> to hold drop counters.
>
> [...]
Here is the summary with links:
- [v2,net-next,1/5] net: add sk_drops_read(), sk_drops_inc() and sk_drops_reset() helpers
https://git.kernel.org/netdev/net-next/c/f86f42ed2c47
- [v2,net-next,2/5] net: add sk_drops_skbadd() helper
https://git.kernel.org/netdev/net-next/c/cb4d5a6eb600
- [v2,net-next,3/5] net: add sk->sk_drop_counters
https://git.kernel.org/netdev/net-next/c/c51613fa276f
- [v2,net-next,4/5] udp: add drop_counters to udp socket
https://git.kernel.org/netdev/net-next/c/51132b99f01c
- [v2,net-next,5/5] inet: raw: add drop_counters to raw sockets
https://git.kernel.org/netdev/net-next/c/b81aa23234d9
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html