* [PATCH v5 net-next 0/3] udp: Add 4-tuple hash for connected sockets
@ 2024-10-18 11:45 Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot Philo Lu
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Philo Lu @ 2024-10-18 11:45 UTC (permalink / raw)
To: netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, pabeni, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
This patchset introduces a 4-tuple hash for connected udp sockets, to
make connected udp lookup faster. Tests using udpgso_bench_tx/rx show no
obvious difference with and without this patchset for unconnected socket
receiving (see the v4 thread).
Patch 1: Add a new counter for hslot2, named hash4_cnt, to avoid a cache
line miss during lookup.
Patches 2 and 3: Implement the 4-tuple hash for IPv4.
(IPv6 support is in progress.)
The detailed motivation is described in Patch 3.
The 4-tuple hash increases the size of udp_sock and udp_hslot, so it is
added under a CONFIG_BASE_SMALL check, i.e., it is a no-op when
CONFIG_BASE_SMALL is set.
AFAICS the patchset can be further improved by:
(a) Better interaction with hash2/reuseport. Currently hash4 hardly
affects other mechanisms, but maintaining sockets in both the hash4 and
hash2 lists seems unnecessary.
(b) Supporting early demux and IPv6.
changelogs:
v4 -> v5 (Paolo Abeni):
- Add CONFIG_BASE_SMALL with which udp hash4 does nothing
v3 -> v4 (Willem de Bruijn):
- fix mistakes in udp_pernet_table_alloc()
RFCv2 -> v3 (Gur Stavi):
- minor fix in udp_hashslot2() and udp_table_init()
- add rcu sync in rehash4()
RFCv1 -> RFCv2:
- add a new struct for hslot2
- remove the UDP_HASH4 sockopt, because hash4 has little side effect on
unconnected sockets
- add rehash in connect()
- re-organize the patch into 3 smaller ones
- other minor fix
v4:
https://lore.kernel.org/all/20241012012918.70888-1-lulie@linux.alibaba.com/
v3:
https://lore.kernel.org/all/20241010090351.79698-1-lulie@linux.alibaba.com/
RFCv2:
https://lore.kernel.org/all/20240924110414.52618-1-lulie@linux.alibaba.com/
RFCv1:
https://lore.kernel.org/all/20240913100941.8565-1-lulie@linux.alibaba.com/
Philo Lu (3):
net/udp: Add a new struct for hash2 slot
net/udp: Add 4-tuple hash list basis
ipv4/udp: Add 4-tuple hash for connected socket
include/linux/udp.h | 11 +++
include/net/udp.h | 111 +++++++++++++++++++++-
net/ipv4/udp.c | 217 +++++++++++++++++++++++++++++++++++++++-----
net/ipv6/udp.c | 17 ++--
4 files changed, 317 insertions(+), 39 deletions(-)
--
2.32.0.3.g01195cf9f
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot
2024-10-18 11:45 [PATCH v5 net-next 0/3] udp: Add 4-tuple hash for connected sockets Philo Lu
@ 2024-10-18 11:45 ` Philo Lu
2024-10-24 14:31 ` Paolo Abeni
2024-10-18 11:45 ` [PATCH v5 net-next 2/3] net/udp: Add 4-tuple hash list basis Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket Philo Lu
2 siblings, 1 reply; 11+ messages in thread
From: Philo Lu @ 2024-10-18 11:45 UTC (permalink / raw)
To: netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, pabeni, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
Prepare for the udp 4-tuple hash (uhash4 for short).
To implement uhash4 without cache line misses during lookup, hslot2 is
used to record the number of hashed sockets in hslot4. Thus, add a new
struct udp_hslot_main with a field hash4_cnt, which is used by hash2.
The new struct avoids doubling the size of udp_hslot.
Before a uhash4 lookup, hash4_cnt is checked first to see if there are
hashed sockets in hslot4. Because hslot2 is always accessed during
lookup anyway, this check causes no extra cache line miss.
Related helpers are updated and used wherever possible.
uhash4 itself is implemented in the following patches.
Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
---
include/net/udp.h | 23 +++++++++++++++++++----
net/ipv4/udp.c | 44 +++++++++++++++++++++++---------------------
net/ipv6/udp.c | 15 ++++++---------
3 files changed, 48 insertions(+), 34 deletions(-)
diff --git a/include/net/udp.h b/include/net/udp.h
index 61222545ab1c..cd2158618329 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -50,7 +50,7 @@ struct udp_skb_cb {
#define UDP_SKB_CB(__skb) ((struct udp_skb_cb *)((__skb)->cb))
/**
- * struct udp_hslot - UDP hash slot
+ * struct udp_hslot - UDP hash slot used by udp_table.hash
*
* @head: head of list of sockets
* @count: number of sockets in 'head' list
@@ -60,7 +60,21 @@ struct udp_hslot {
struct hlist_head head;
int count;
spinlock_t lock;
-} __attribute__((aligned(2 * sizeof(long))));
+} __aligned(2 * sizeof(long));
+
+/**
+ * struct udp_hslot_main - UDP hash slot used by udp_table.hash2
+ *
+ * @hslot: basic hash slot
+ * @hash4_cnt: number of sockets in hslot4 of the same (local port, local address)
+ */
+struct udp_hslot_main {
+ struct udp_hslot hslot; /* must be the first member */
+#if !IS_ENABLED(CONFIG_BASE_SMALL)
+ u32 hash4_cnt;
+#endif
+} __aligned(2 * sizeof(long));
+#define UDP_HSLOT_MAIN(__hslot) ((struct udp_hslot_main *)(__hslot))
/**
* struct udp_table - UDP table
@@ -72,7 +86,7 @@ struct udp_hslot {
*/
struct udp_table {
struct udp_hslot *hash;
- struct udp_hslot *hash2;
+ struct udp_hslot_main *hash2;
unsigned int mask;
unsigned int log;
};
@@ -84,6 +98,7 @@ static inline struct udp_hslot *udp_hashslot(struct udp_table *table,
{
return &table->hash[udp_hashfn(net, num, table->mask)];
}
+
/*
* For secondary hash, net_hash_mix() is performed before calling
* udp_hashslot2(), this explains difference with udp_hashslot()
@@ -91,7 +106,7 @@ static inline struct udp_hslot *udp_hashslot(struct udp_table *table,
static inline struct udp_hslot *udp_hashslot2(struct udp_table *table,
unsigned int hash)
{
- return &table->hash2[hash & table->mask];
+ return &table->hash2[hash & table->mask].hslot;
}
extern struct proto udp_prot;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 4b74a25d0b6e..2b7e72c4ed6d 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -486,13 +486,12 @@ struct sock *__udp4_lib_lookup(const struct net *net, __be32 saddr,
int sdif, struct udp_table *udptable, struct sk_buff *skb)
{
unsigned short hnum = ntohs(dport);
- unsigned int hash2, slot2;
struct udp_hslot *hslot2;
struct sock *result, *sk;
+ unsigned int hash2;
hash2 = ipv4_portaddr_hash(net, daddr, hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
/* Lookup connected or non-wildcard socket */
result = udp4_lib_lookup2(net, saddr, sport,
@@ -519,8 +518,7 @@ struct sock *__udp4_lib_lookup(const struct net *net, __be32 saddr,
/* Lookup wildcard sockets */
hash2 = ipv4_portaddr_hash(net, htonl(INADDR_ANY), hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
result = udp4_lib_lookup2(net, saddr, sport,
htonl(INADDR_ANY), hnum, dif, sdif,
@@ -2266,7 +2264,7 @@ static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
udptable->mask;
hash2 = ipv4_portaddr_hash(net, daddr, hnum) & udptable->mask;
start_lookup:
- hslot = &udptable->hash2[hash2];
+ hslot = &udptable->hash2[hash2].hslot;
offset = offsetof(typeof(*sk), __sk_common.skc_portaddr_node);
}
@@ -2537,14 +2535,13 @@ static struct sock *__udp4_lib_demux_lookup(struct net *net,
struct udp_table *udptable = net->ipv4.udp_table;
INET_ADDR_COOKIE(acookie, rmt_addr, loc_addr);
unsigned short hnum = ntohs(loc_port);
- unsigned int hash2, slot2;
struct udp_hslot *hslot2;
+ unsigned int hash2;
__portpair ports;
struct sock *sk;
hash2 = ipv4_portaddr_hash(net, loc_addr, hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
ports = INET_COMBINED_PORTS(rmt_port, hnum);
udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
@@ -3185,7 +3182,7 @@ static struct sock *bpf_iter_udp_batch(struct seq_file *seq)
batch_sks = 0;
for (; state->bucket <= udptable->mask; state->bucket++) {
- struct udp_hslot *hslot2 = &udptable->hash2[state->bucket];
+ struct udp_hslot *hslot2 = &udptable->hash2[state->bucket].hslot;
if (hlist_empty(&hslot2->head))
continue;
@@ -3426,10 +3423,11 @@ __setup("uhash_entries=", set_uhash_entries);
void __init udp_table_init(struct udp_table *table, const char *name)
{
- unsigned int i;
+ unsigned int i, slot_size;
+ slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main);
table->hash = alloc_large_system_hash(name,
- 2 * sizeof(struct udp_hslot),
+ slot_size,
uhash_entries,
21, /* one slot per 2 MB */
0,
@@ -3438,16 +3436,17 @@ void __init udp_table_init(struct udp_table *table, const char *name)
UDP_HTABLE_SIZE_MIN,
UDP_HTABLE_SIZE_MAX);
- table->hash2 = table->hash + (table->mask + 1);
+ table->hash2 = (void *)(table->hash + (table->mask + 1));
for (i = 0; i <= table->mask; i++) {
INIT_HLIST_HEAD(&table->hash[i].head);
table->hash[i].count = 0;
spin_lock_init(&table->hash[i].lock);
}
for (i = 0; i <= table->mask; i++) {
- INIT_HLIST_HEAD(&table->hash2[i].head);
- table->hash2[i].count = 0;
- spin_lock_init(&table->hash2[i].lock);
+ INIT_HLIST_HEAD(&table->hash2[i].hslot.head);
+ table->hash2[i].hslot.count = 0;
+ spin_lock_init(&table->hash2[i].hslot.lock);
+ table->hash2[i].hash4_cnt = 0;
}
}
@@ -3474,18 +3473,20 @@ static void __net_init udp_sysctl_init(struct net *net)
static struct udp_table __net_init *udp_pernet_table_alloc(unsigned int hash_entries)
{
struct udp_table *udptable;
+ unsigned int slot_size;
int i;
udptable = kmalloc(sizeof(*udptable), GFP_KERNEL);
if (!udptable)
goto out;
- udptable->hash = vmalloc_huge(hash_entries * 2 * sizeof(struct udp_hslot),
+ slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main);
+ udptable->hash = vmalloc_huge(hash_entries * slot_size,
GFP_KERNEL_ACCOUNT);
if (!udptable->hash)
goto free_table;
- udptable->hash2 = udptable->hash + hash_entries;
+ udptable->hash2 = (void *)(udptable->hash + hash_entries);
udptable->mask = hash_entries - 1;
udptable->log = ilog2(hash_entries);
@@ -3494,9 +3495,10 @@ static struct udp_table __net_init *udp_pernet_table_alloc(unsigned int hash_ent
udptable->hash[i].count = 0;
spin_lock_init(&udptable->hash[i].lock);
- INIT_HLIST_HEAD(&udptable->hash2[i].head);
- udptable->hash2[i].count = 0;
- spin_lock_init(&udptable->hash2[i].lock);
+ INIT_HLIST_HEAD(&udptable->hash2[i].hslot.head);
+ udptable->hash2[i].hslot.count = 0;
+ spin_lock_init(&udptable->hash2[i].hslot.lock);
+ udptable->hash2[i].hash4_cnt = 0;
}
return udptable;
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 52dfbb2ff1a8..bbf3352213c4 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -224,13 +224,12 @@ struct sock *__udp6_lib_lookup(const struct net *net,
struct sk_buff *skb)
{
unsigned short hnum = ntohs(dport);
- unsigned int hash2, slot2;
struct udp_hslot *hslot2;
struct sock *result, *sk;
+ unsigned int hash2;
hash2 = ipv6_portaddr_hash(net, daddr, hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
/* Lookup connected or non-wildcard sockets */
result = udp6_lib_lookup2(net, saddr, sport,
@@ -257,8 +256,7 @@ struct sock *__udp6_lib_lookup(const struct net *net,
/* Lookup wildcard sockets */
hash2 = ipv6_portaddr_hash(net, &in6addr_any, hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
result = udp6_lib_lookup2(net, saddr, sport,
&in6addr_any, hnum, dif, sdif,
@@ -859,7 +857,7 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
udptable->mask;
hash2 = ipv6_portaddr_hash(net, daddr, hnum) & udptable->mask;
start_lookup:
- hslot = &udptable->hash2[hash2];
+ hslot = &udptable->hash2[hash2].hslot;
offset = offsetof(typeof(*sk), __sk_common.skc_portaddr_node);
}
@@ -1065,14 +1063,13 @@ static struct sock *__udp6_lib_demux_lookup(struct net *net,
{
struct udp_table *udptable = net->ipv4.udp_table;
unsigned short hnum = ntohs(loc_port);
- unsigned int hash2, slot2;
struct udp_hslot *hslot2;
+ unsigned int hash2;
__portpair ports;
struct sock *sk;
hash2 = ipv6_portaddr_hash(net, loc_addr, hnum);
- slot2 = hash2 & udptable->mask;
- hslot2 = &udptable->hash2[slot2];
+ hslot2 = udp_hashslot2(udptable, hash2);
ports = INET_COMBINED_PORTS(rmt_port, hnum);
udp_portaddr_for_each_entry_rcu(sk, &hslot2->head) {
--
2.32.0.3.g01195cf9f
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v5 net-next 2/3] net/udp: Add 4-tuple hash list basis
2024-10-18 11:45 [PATCH v5 net-next 0/3] udp: Add 4-tuple hash for connected sockets Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot Philo Lu
@ 2024-10-18 11:45 ` Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket Philo Lu
2 siblings, 0 replies; 11+ messages in thread
From: Philo Lu @ 2024-10-18 11:45 UTC (permalink / raw)
To: netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, pabeni, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
Add a new hash list, hash4, to the udp table. It will be used to
implement a 4-tuple hash for connected udp sockets. This patch adds the
hlist to the table and implements the helpers and initialization. The
4-tuple hash itself is implemented in the following patch.
Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Signed-off-by: Cambda Zhu <cambda@linux.alibaba.com>
Signed-off-by: Fred Chen <fred.cc@alibaba-inc.com>
Signed-off-by: Yubing Qiu <yubing.qiuyubing@alibaba-inc.com>
---
include/linux/udp.h | 11 ++++++
include/net/udp.h | 87 ++++++++++++++++++++++++++++++++++++++++++++-
net/ipv4/udp.c | 10 +++---
3 files changed, 103 insertions(+), 5 deletions(-)
diff --git a/include/linux/udp.h b/include/linux/udp.h
index 3eb3f2b9a2a0..09abd3700d81 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -56,6 +56,12 @@ struct udp_sock {
int pending; /* Any pending frames ? */
__u8 encap_type; /* Is this an Encapsulation socket? */
+#if !IS_ENABLED(CONFIG_BASE_SMALL)
+ /* For UDP 4-tuple hash */
+ __u16 udp_lrpa_hash;
+ struct hlist_node udp_lrpa_node;
+#endif
+
/*
* Following member retains the information to create a UDP header
* when the socket is uncorked.
@@ -206,6 +212,11 @@ static inline void udp_allow_gso(struct sock *sk)
#define udp_portaddr_for_each_entry_rcu(__sk, list) \
hlist_for_each_entry_rcu(__sk, list, __sk_common.skc_portaddr_node)
+#if !IS_ENABLED(CONFIG_BASE_SMALL)
+#define udp_lrpa_for_each_entry_rcu(__up, list) \
+ hlist_for_each_entry_rcu(__up, list, udp_lrpa_node)
+#endif
+
#define IS_UDPLITE(__sk) (__sk->sk_protocol == IPPROTO_UDPLITE)
#endif /* _LINUX_UDP_H */
diff --git a/include/net/udp.h b/include/net/udp.h
index cd2158618329..8aefdc404362 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -50,7 +50,7 @@ struct udp_skb_cb {
#define UDP_SKB_CB(__skb) ((struct udp_skb_cb *)((__skb)->cb))
/**
- * struct udp_hslot - UDP hash slot used by udp_table.hash
+ * struct udp_hslot - UDP hash slot used by udp_table.hash/hash4
*
* @head: head of list of sockets
* @count: number of sockets in 'head' list
@@ -81,12 +81,17 @@ struct udp_hslot_main {
*
* @hash: hash table, sockets are hashed on (local port)
* @hash2: hash table, sockets are hashed on (local port, local address)
+ * @hash4: hash table, connected sockets are hashed on
+ * (local port, local address, remote port, remote address)
* @mask: number of slots in hash tables, minus 1
* @log: log2(number of slots in hash table)
*/
struct udp_table {
struct udp_hslot *hash;
struct udp_hslot_main *hash2;
+#if !IS_ENABLED(CONFIG_BASE_SMALL)
+ struct udp_hslot *hash4;
+#endif
unsigned int mask;
unsigned int log;
};
@@ -109,6 +114,86 @@ static inline struct udp_hslot *udp_hashslot2(struct udp_table *table,
return &table->hash2[hash & table->mask].hslot;
}
+#if IS_ENABLED(CONFIG_BASE_SMALL)
+static inline struct udp_hslot *udp_hashslot4(struct udp_table *table,
+ unsigned int hash)
+{
+ BUILD_BUG();
+ return NULL;
+}
+
+static inline bool udp_hashed4(const struct sock *sk)
+{
+ return false;
+}
+
+static inline unsigned int udp_hash4_slot_size(void)
+{
+ return 0;
+}
+
+static inline void udp_table_hash4_init(struct udp_table *table)
+{
+}
+
+static inline bool udp_has_hash4(const struct udp_hslot *hslot2)
+{
+ return false;
+}
+
+static inline void udp_hash4_inc(struct udp_hslot *hslot2)
+{
+}
+
+static inline void udp_hash4_dec(struct udp_hslot *hslot2)
+{
+}
+#else /* !CONFIG_BASE_SMALL */
+static inline struct udp_hslot *udp_hashslot4(struct udp_table *table,
+ unsigned int hash)
+{
+ return &table->hash4[hash & table->mask];
+}
+
+static inline bool udp_hashed4(const struct sock *sk)
+{
+ return !hlist_unhashed(&udp_sk(sk)->udp_lrpa_node);
+}
+
+static inline unsigned int udp_hash4_slot_size(void)
+{
+ return sizeof(struct udp_hslot);
+}
+
+/* Must be called with table->hash2 initialized */
+static inline void udp_table_hash4_init(struct udp_table *table)
+{
+ table->hash4 = (void *)(table->hash2 + (table->mask + 1));
+ for (int i = 0; i <= table->mask; i++) {
+ table->hash2[i].hash4_cnt = 0;
+
+ INIT_HLIST_HEAD(&table->hash4[i].head);
+ table->hash4[i].count = 0;
+ spin_lock_init(&table->hash4[i].lock);
+ }
+}
+
+static inline bool udp_has_hash4(const struct udp_hslot *hslot2)
+{
+ return UDP_HSLOT_MAIN(hslot2)->hash4_cnt;
+}
+
+static inline void udp_hash4_inc(struct udp_hslot *hslot2)
+{
+ UDP_HSLOT_MAIN(hslot2)->hash4_cnt++;
+}
+
+static inline void udp_hash4_dec(struct udp_hslot *hslot2)
+{
+ UDP_HSLOT_MAIN(hslot2)->hash4_cnt--;
+}
+#endif /* CONFIG_BASE_SMALL */
+
extern struct proto udp_prot;
extern atomic_long_t udp_memory_allocated;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 2b7e72c4ed6d..74bfab0f44f8 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -3425,7 +3425,8 @@ void __init udp_table_init(struct udp_table *table, const char *name)
{
unsigned int i, slot_size;
- slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main);
+ slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main) +
+ udp_hash4_slot_size();
table->hash = alloc_large_system_hash(name,
slot_size,
uhash_entries,
@@ -3446,8 +3447,8 @@ void __init udp_table_init(struct udp_table *table, const char *name)
INIT_HLIST_HEAD(&table->hash2[i].hslot.head);
table->hash2[i].hslot.count = 0;
spin_lock_init(&table->hash2[i].hslot.lock);
- table->hash2[i].hash4_cnt = 0;
}
+ udp_table_hash4_init(table);
}
u32 udp_flow_hashrnd(void)
@@ -3480,7 +3481,8 @@ static struct udp_table __net_init *udp_pernet_table_alloc(unsigned int hash_ent
if (!udptable)
goto out;
- slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main);
+ slot_size = sizeof(struct udp_hslot) + sizeof(struct udp_hslot_main) +
+ udp_hash4_slot_size();
udptable->hash = vmalloc_huge(hash_entries * slot_size,
GFP_KERNEL_ACCOUNT);
if (!udptable->hash)
@@ -3498,8 +3500,8 @@ static struct udp_table __net_init *udp_pernet_table_alloc(unsigned int hash_ent
INIT_HLIST_HEAD(&udptable->hash2[i].hslot.head);
udptable->hash2[i].hslot.count = 0;
spin_lock_init(&udptable->hash2[i].hslot.lock);
- udptable->hash2[i].hash4_cnt = 0;
}
+ udp_table_hash4_init(udptable);
return udptable;
--
2.32.0.3.g01195cf9f
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-18 11:45 [PATCH v5 net-next 0/3] udp: Add 4-tuple hash for connected sockets Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 2/3] net/udp: Add 4-tuple hash list basis Philo Lu
@ 2024-10-18 11:45 ` Philo Lu
2024-10-24 15:01 ` Paolo Abeni
2 siblings, 1 reply; 11+ messages in thread
From: Philo Lu @ 2024-10-18 11:45 UTC (permalink / raw)
To: netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, pabeni, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
Currently, the udp_table has two hash tables: the port hash and the
portaddr hash. Usually, for UDP servers, all sockets have the same local
port and address, so they all sit in the same hash slot within a
reuseport group.
In some applications, UDP servers use connect() to manage clients. In
particular, upon first receiving from an unseen 4-tuple, a new socket is
created and connect()ed to the remote addr:port, and the fd is then used
exclusively for that client.
Once there are connected sockets in a reuseport group, udp has to score
all sockets in the same hash2 slot to find the best match. This can be
inefficient with a large number of connections, resulting in high
softirq overhead.
To solve this, this patch implements a 4-tuple hash for connected udp
sockets. During connect(), the hash4 slot is updated, along with a
corresponding counter, hash4_cnt, in hslot2. In __udp4_lib_lookup(),
hslot4 is searched first if the counter is non-zero; otherwise, hslot2
is used as before. Note that only connected sockets take this hash4
path; unconnected ones are unaffected.
Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Signed-off-by: Cambda Zhu <cambda@linux.alibaba.com>
Signed-off-by: Fred Chen <fred.cc@alibaba-inc.com>
Signed-off-by: Yubing Qiu <yubing.qiuyubing@alibaba-inc.com>
---
include/net/udp.h | 3 +-
net/ipv4/udp.c | 171 +++++++++++++++++++++++++++++++++++++++++++++-
net/ipv6/udp.c | 2 +-
3 files changed, 171 insertions(+), 5 deletions(-)
diff --git a/include/net/udp.h b/include/net/udp.h
index 8aefdc404362..97c5ae83723c 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -293,7 +293,7 @@ static inline int udp_lib_hash(struct sock *sk)
}
void udp_lib_unhash(struct sock *sk);
-void udp_lib_rehash(struct sock *sk, u16 new_hash);
+void udp_lib_rehash(struct sock *sk, u16 new_hash, u16 new_hash4);
static inline void udp_lib_close(struct sock *sk, long timeout)
{
@@ -386,6 +386,7 @@ int udp_rcv(struct sk_buff *skb);
int udp_ioctl(struct sock *sk, int cmd, int *karg);
int udp_init_sock(struct sock *sk);
int udp_pre_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len);
+int udp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len);
int __udp_disconnect(struct sock *sk, int flags);
int udp_disconnect(struct sock *sk, int flags);
__poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 74bfab0f44f8..5d944cec7a27 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -478,6 +478,134 @@ static struct sock *udp4_lib_lookup2(const struct net *net,
return result;
}
+#if IS_ENABLED(CONFIG_BASE_SMALL)
+static struct sock *udp4_lib_lookup4(const struct net *net,
+ __be32 saddr, __be16 sport,
+ __be32 daddr, unsigned int hnum,
+ int dif, int sdif,
+ struct udp_table *udptable)
+{
+ return NULL;
+}
+
+static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
+{
+}
+
+static void udp4_unhash4(struct udp_table *udptable, struct sock *sk)
+{
+}
+
+static void udp4_hash4(struct sock *sk)
+{
+}
+#else /* !CONFIG_BASE_SMALL */
+static struct sock *udp4_lib_lookup4(const struct net *net,
+ __be32 saddr, __be16 sport,
+ __be32 daddr, unsigned int hnum,
+ int dif, int sdif,
+ struct udp_table *udptable)
+{
+ unsigned int hash4 = udp_ehashfn(net, daddr, hnum, saddr, sport);
+ const __portpair ports = INET_COMBINED_PORTS(sport, hnum);
+ struct udp_hslot *hslot4 = udp_hashslot4(udptable, hash4);
+ struct udp_sock *up;
+ struct sock *sk;
+
+ INET_ADDR_COOKIE(acookie, saddr, daddr);
+ udp_lrpa_for_each_entry_rcu(up, &hslot4->head) {
+ sk = (struct sock *)up;
+ if (inet_match(net, sk, acookie, ports, dif, sdif))
+ return sk;
+ }
+ return NULL;
+}
+
+/* In hash4, rehash can also happen in connect(), where hash4_cnt keeps unchanged. */
+static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
+{
+ struct udp_hslot *hslot4, *nhslot4;
+
+ hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
+ nhslot4 = udp_hashslot4(udptable, newhash4);
+ udp_sk(sk)->udp_lrpa_hash = newhash4;
+
+ if (hslot4 != nhslot4) {
+ spin_lock_bh(&hslot4->lock);
+ hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
+ hslot4->count--;
+ spin_unlock_bh(&hslot4->lock);
+
+ synchronize_rcu();
+
+ spin_lock_bh(&nhslot4->lock);
+ hlist_add_head_rcu(&udp_sk(sk)->udp_lrpa_node, &nhslot4->head);
+ nhslot4->count++;
+ spin_unlock_bh(&nhslot4->lock);
+ }
+}
+
+static void udp4_unhash4(struct udp_table *udptable, struct sock *sk)
+{
+ struct udp_hslot *hslot2, *hslot4;
+
+ if (udp_hashed4(sk)) {
+ hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash);
+ hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
+
+ spin_lock(&hslot4->lock);
+ hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
+ hslot4->count--;
+ spin_unlock(&hslot4->lock);
+
+ spin_lock(&hslot2->lock);
+ udp_hash4_dec(hslot2);
+ spin_unlock(&hslot2->lock);
+ }
+}
+
+/* call with sock lock */
+static void udp4_hash4(struct sock *sk)
+{
+ struct udp_hslot *hslot, *hslot2, *hslot4;
+ struct net *net = sock_net(sk);
+ struct udp_table *udptable;
+ unsigned int hash;
+
+ if (sk_unhashed(sk) || inet_sk(sk)->inet_rcv_saddr == htonl(INADDR_ANY))
+ return;
+
+ hash = udp_ehashfn(net, inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
+ inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
+
+ udptable = net->ipv4.udp_table;
+ if (udp_hashed4(sk)) {
+ udp4_rehash4(udptable, sk, hash);
+ return;
+ }
+
+ hslot = udp_hashslot(udptable, net, udp_sk(sk)->udp_port_hash);
+ hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash);
+ hslot4 = udp_hashslot4(udptable, hash);
+ udp_sk(sk)->udp_lrpa_hash = hash;
+
+ spin_lock_bh(&hslot->lock);
+ if (rcu_access_pointer(sk->sk_reuseport_cb))
+ reuseport_detach_sock(sk);
+
+ spin_lock(&hslot4->lock);
+ hlist_add_head_rcu(&udp_sk(sk)->udp_lrpa_node, &hslot4->head);
+ hslot4->count++;
+ spin_unlock(&hslot4->lock);
+
+ spin_lock(&hslot2->lock);
+ udp_hash4_inc(hslot2);
+ spin_unlock(&hslot2->lock);
+
+ spin_unlock_bh(&hslot->lock);
+}
+#endif /* CONFIG_BASE_SMALL */
+
/* UDP is nearly always wildcards out the wazoo, it makes no sense to try
* harder than this. -DaveM
*/
@@ -493,6 +621,12 @@ struct sock *__udp4_lib_lookup(const struct net *net, __be32 saddr,
hash2 = ipv4_portaddr_hash(net, daddr, hnum);
hslot2 = udp_hashslot2(udptable, hash2);
+ if (udp_has_hash4(hslot2)) {
+ result = udp4_lib_lookup4(net, saddr, sport, daddr, hnum, dif, sdif, udptable);
+ if (result) /* udp4_lib_lookup4 returns sk or NULL */
+ return result;
+ }
+
/* Lookup connected or non-wildcard socket */
result = udp4_lib_lookup2(net, saddr, sport,
daddr, hnum, dif, sdif,
@@ -1931,6 +2065,19 @@ int udp_pre_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
}
EXPORT_SYMBOL(udp_pre_connect);
+int udp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+{
+ int res;
+
+ lock_sock(sk);
+ res = __ip4_datagram_connect(sk, uaddr, addr_len);
+ if (!res)
+ udp4_hash4(sk);
+ release_sock(sk);
+ return res;
+}
+EXPORT_SYMBOL(udp_connect);
+
int __udp_disconnect(struct sock *sk, int flags)
{
struct inet_sock *inet = inet_sk(sk);
@@ -1990,6 +2137,8 @@ void udp_lib_unhash(struct sock *sk)
hlist_del_init_rcu(&udp_sk(sk)->udp_portaddr_node);
hslot2->count--;
spin_unlock(&hslot2->lock);
+
+ udp4_unhash4(udptable, sk);
}
spin_unlock_bh(&hslot->lock);
}
@@ -1999,7 +2148,7 @@ EXPORT_SYMBOL(udp_lib_unhash);
/*
* inet_rcv_saddr was changed, we must rehash secondary hash
*/
-void udp_lib_rehash(struct sock *sk, u16 newhash)
+void udp_lib_rehash(struct sock *sk, u16 newhash, u16 newhash4)
{
if (sk_hashed(sk)) {
struct udp_table *udptable = udp_get_table_prot(sk);
@@ -2031,6 +2180,19 @@ void udp_lib_rehash(struct sock *sk, u16 newhash)
spin_unlock(&nhslot2->lock);
}
+ if (udp_hashed4(sk)) {
+ udp4_rehash4(udptable, sk, newhash4);
+
+ if (hslot2 != nhslot2) {
+ spin_lock(&hslot2->lock);
+ udp_hash4_dec(hslot2);
+ spin_unlock(&hslot2->lock);
+
+ spin_lock(&nhslot2->lock);
+ udp_hash4_inc(nhslot2);
+ spin_unlock(&nhslot2->lock);
+ }
+ }
spin_unlock_bh(&hslot->lock);
}
}
@@ -2042,7 +2204,10 @@ void udp_v4_rehash(struct sock *sk)
u16 new_hash = ipv4_portaddr_hash(sock_net(sk),
inet_sk(sk)->inet_rcv_saddr,
inet_sk(sk)->inet_num);
- udp_lib_rehash(sk, new_hash);
+ u16 new_hash4 = udp_ehashfn(sock_net(sk),
+ inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
+ inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
+ udp_lib_rehash(sk, new_hash, new_hash4);
}
static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
@@ -2935,7 +3100,7 @@ struct proto udp_prot = {
.owner = THIS_MODULE,
.close = udp_lib_close,
.pre_connect = udp_pre_connect,
- .connect = ip4_datagram_connect,
+ .connect = udp_connect,
.disconnect = udp_disconnect,
.ioctl = udp_ioctl,
.init = udp_init_sock,
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index bbf3352213c4..4d3dfcb48a39 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -111,7 +111,7 @@ void udp_v6_rehash(struct sock *sk)
&sk->sk_v6_rcv_saddr,
inet_sk(sk)->inet_num);
- udp_lib_rehash(sk, new_hash);
+ udp_lib_rehash(sk, new_hash, 0); /* 4-tuple hash not implemented */
}
static int compute_score(struct sock *sk, const struct net *net,
--
2.32.0.3.g01195cf9f
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot
2024-10-18 11:45 ` [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot Philo Lu
@ 2024-10-24 14:31 ` Paolo Abeni
2024-10-24 14:38 ` Paolo Abeni
0 siblings, 1 reply; 11+ messages in thread
From: Paolo Abeni @ 2024-10-24 14:31 UTC (permalink / raw)
To: Philo Lu, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 10/18/24 13:45, Philo Lu wrote:
> @@ -3438,16 +3436,17 @@ void __init udp_table_init(struct udp_table *table, const char *name)
> UDP_HTABLE_SIZE_MIN,
> UDP_HTABLE_SIZE_MAX);
>
> - table->hash2 = table->hash + (table->mask + 1);
> + table->hash2 = (void *)(table->hash + (table->mask + 1));
> for (i = 0; i <= table->mask; i++) {
> INIT_HLIST_HEAD(&table->hash[i].head);
> table->hash[i].count = 0;
> spin_lock_init(&table->hash[i].lock);
> }
> for (i = 0; i <= table->mask; i++) {
> - INIT_HLIST_HEAD(&table->hash2[i].head);
> - table->hash2[i].count = 0;
> - spin_lock_init(&table->hash2[i].lock);
> + INIT_HLIST_HEAD(&table->hash2[i].hslot.head);
> + table->hash2[i].hslot.count = 0;
> + spin_lock_init(&table->hash2[i].hslot.lock);
> + table->hash2[i].hash4_cnt = 0;
Does not build for CONFIG_BASE_SMALL=y kernels. You need to put all the
hash4_cnt access under compiler guards.
Cheers,
Paolo
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot
2024-10-24 14:31 ` Paolo Abeni
@ 2024-10-24 14:38 ` Paolo Abeni
0 siblings, 0 replies; 11+ messages in thread
From: Paolo Abeni @ 2024-10-24 14:38 UTC (permalink / raw)
To: Philo Lu, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 10/24/24 16:31, Paolo Abeni wrote:
> On 10/18/24 13:45, Philo Lu wrote:
>> @@ -3438,16 +3436,17 @@ void __init udp_table_init(struct udp_table *table, const char *name)
>> UDP_HTABLE_SIZE_MIN,
>> UDP_HTABLE_SIZE_MAX);
>>
>> - table->hash2 = table->hash + (table->mask + 1);
>> + table->hash2 = (void *)(table->hash + (table->mask + 1));
>> for (i = 0; i <= table->mask; i++) {
>> INIT_HLIST_HEAD(&table->hash[i].head);
>> table->hash[i].count = 0;
>> spin_lock_init(&table->hash[i].lock);
>> }
>> for (i = 0; i <= table->mask; i++) {
>> - INIT_HLIST_HEAD(&table->hash2[i].head);
>> - table->hash2[i].count = 0;
>> - spin_lock_init(&table->hash2[i].lock);
>> + INIT_HLIST_HEAD(&table->hash2[i].hslot.head);
>> + table->hash2[i].hslot.count = 0;
>> + spin_lock_init(&table->hash2[i].hslot.lock);
>> + table->hash2[i].hash4_cnt = 0;
>
> Does not build for CONFIG_BASE_SMALL=y kernels. You need to put all the
> hash4_cnt access under compiler guards.
I see now it's fixed in the next patch, but it's not good enough: we
want to preserve bisectability. You should introduce
udp_table_hash4_init() in this patch.
Thanks,
Paolo
* Re: [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-18 11:45 ` [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket Philo Lu
@ 2024-10-24 15:01 ` Paolo Abeni
2024-10-24 15:04 ` Paolo Abeni
2024-10-25 3:50 ` Philo Lu
0 siblings, 2 replies; 11+ messages in thread
From: Paolo Abeni @ 2024-10-24 15:01 UTC (permalink / raw)
To: Philo Lu, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 10/18/24 13:45, Philo Lu wrote:
[...]
> +/* In hash4, rehash can also happen in connect(), where hash4_cnt keeps unchanged. */
> +static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
> +{
> + struct udp_hslot *hslot4, *nhslot4;
> +
> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
> + nhslot4 = udp_hashslot4(udptable, newhash4);
> + udp_sk(sk)->udp_lrpa_hash = newhash4;
> +
> + if (hslot4 != nhslot4) {
> + spin_lock_bh(&hslot4->lock);
> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
> + hslot4->count--;
> + spin_unlock_bh(&hslot4->lock);
> +
> + synchronize_rcu();
This deserves a comment explaining why it's needed. I had to dig into
past revisions to understand it.
> +
> + spin_lock_bh(&nhslot4->lock);
> + hlist_add_head_rcu(&udp_sk(sk)->udp_lrpa_node, &nhslot4->head);
> + nhslot4->count++;
> + spin_unlock_bh(&nhslot4->lock);
> + }
> +}
> +
> +static void udp4_unhash4(struct udp_table *udptable, struct sock *sk)
> +{
> + struct udp_hslot *hslot2, *hslot4;
> +
> + if (udp_hashed4(sk)) {
> + hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash);
> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
> +
> + spin_lock(&hslot4->lock);
> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
> + hslot4->count--;
> + spin_unlock(&hslot4->lock);
> +
> + spin_lock(&hslot2->lock);
> + udp_hash4_dec(hslot2);
> + spin_unlock(&hslot2->lock);
> + }
> +}
> +
> +/* call with sock lock */
> +static void udp4_hash4(struct sock *sk)
> +{
> + struct udp_hslot *hslot, *hslot2, *hslot4;
> + struct net *net = sock_net(sk);
> + struct udp_table *udptable;
> + unsigned int hash;
> +
> + if (sk_unhashed(sk) || inet_sk(sk)->inet_rcv_saddr == htonl(INADDR_ANY))
> + return;
> +
> + hash = udp_ehashfn(net, inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
> + inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
> +
> + udptable = net->ipv4.udp_table;
> + if (udp_hashed4(sk)) {
> + udp4_rehash4(udptable, sk, hash);
It's unclear to me how we can enter this branch. It's also unclear why
you don't need to call udp_hash4_inc()/udp_hash4_dec() here, too. Why
can't such accounting be placed in udp4_rehash4()?
[...]
> @@ -2031,6 +2180,19 @@ void udp_lib_rehash(struct sock *sk, u16 newhash)
> spin_unlock(&nhslot2->lock);
> }
>
> + if (udp_hashed4(sk)) {
> + udp4_rehash4(udptable, sk, newhash4);
> +
> + if (hslot2 != nhslot2) {
> + spin_lock(&hslot2->lock);
> + udp_hash4_dec(hslot2);
> + spin_unlock(&hslot2->lock);
> +
> + spin_lock(&nhslot2->lock);
> + udp_hash4_inc(nhslot2);
> + spin_unlock(&nhslot2->lock);
> + }
> + }
> spin_unlock_bh(&hslot->lock);
The udp4_rehash4() call above is in atomic context and could end up
calling synchronize_rcu(), which is a blocking function. You must avoid that.
Cheers,
Paolo
* Re: [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-24 15:01 ` Paolo Abeni
@ 2024-10-24 15:04 ` Paolo Abeni
2024-10-25 3:50 ` Philo Lu
1 sibling, 0 replies; 11+ messages in thread
From: Paolo Abeni @ 2024-10-24 15:04 UTC (permalink / raw)
To: Philo Lu, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 10/24/24 17:01, Paolo Abeni wrote:
> The udp4_rehash4() call above is in atomic context and could end-up
> calling synchronize_rcu() which is a blocking function. You must avoid that.
I almost forgot: please include, in this commit message or in the cover
letter, performance figures before and after this series, both for
unconnected sockets and for a stress test with a lot of connected
sockets.
Thanks,
Paolo
* Re: [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-24 15:01 ` Paolo Abeni
2024-10-24 15:04 ` Paolo Abeni
@ 2024-10-25 3:50 ` Philo Lu
2024-10-25 9:02 ` Paolo Abeni
1 sibling, 1 reply; 11+ messages in thread
From: Philo Lu @ 2024-10-25 3:50 UTC (permalink / raw)
To: Paolo Abeni, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 2024/10/24 23:01, Paolo Abeni wrote:
> On 10/18/24 13:45, Philo Lu wrote:
> [...]
>> +/* In hash4, rehash can also happen in connect(), where hash4_cnt keeps unchanged. */
>> +static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
>> +{
>> + struct udp_hslot *hslot4, *nhslot4;
>> +
>> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
>> + nhslot4 = udp_hashslot4(udptable, newhash4);
>> + udp_sk(sk)->udp_lrpa_hash = newhash4;
>> +
>> + if (hslot4 != nhslot4) {
>> + spin_lock_bh(&hslot4->lock);
>> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
>> + hslot4->count--;
>> + spin_unlock_bh(&hslot4->lock);
>> +
>> + synchronize_rcu();
>
> This deserve a comment explaining why it's needed. I had to dig in past
> revision to understand it.
>
Got it. A short explanation here (see [1] for details):
Here we move a node from one hlist to a new one, i.e., we update
node->next from pointing into the old hlist to pointing into the new
one. For a reader traversing the old hlist, if we update node->next just
as the reader steps onto the moved node, the reader is carried over into
the new hlist as well. This is unexpected.
Reader (lookup)                 Writer (rehash)
-----------------               ---------------
1. rcu_read_lock()
2. pos = sk;
                                3. hlist_del_init_rcu(sk, old_slot)
                                4. hlist_add_head_rcu(sk, new_slot)
5. pos = pos->next; <=
6. rcu_read_unlock()
[1]
https://lore.kernel.org/all/0fb425e0-5482-4cdf-9dc1-3906751f8f81@linux.alibaba.com/
>> +
>> + spin_lock_bh(&nhslot4->lock);
>> + hlist_add_head_rcu(&udp_sk(sk)->udp_lrpa_node, &nhslot4->head);
>> + nhslot4->count++;
>> + spin_unlock_bh(&nhslot4->lock);
>> + }
>> +}
>> +
>> +static void udp4_unhash4(struct udp_table *udptable, struct sock *sk)
>> +{
>> + struct udp_hslot *hslot2, *hslot4;
>> +
>> + if (udp_hashed4(sk)) {
>> + hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash);
>> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
>> +
>> + spin_lock(&hslot4->lock);
>> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
>> + hslot4->count--;
>> + spin_unlock(&hslot4->lock);
>> +
>> + spin_lock(&hslot2->lock);
>> + udp_hash4_dec(hslot2);
>> + spin_unlock(&hslot2->lock);
>> + }
>> +}
>> +
>> +/* call with sock lock */
>> +static void udp4_hash4(struct sock *sk)
>> +{
>> + struct udp_hslot *hslot, *hslot2, *hslot4;
>> + struct net *net = sock_net(sk);
>> + struct udp_table *udptable;
>> + unsigned int hash;
>> +
>> + if (sk_unhashed(sk) || inet_sk(sk)->inet_rcv_saddr == htonl(INADDR_ANY))
>> + return;
>> +
>> + hash = udp_ehashfn(net, inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
>> + inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
>> +
>> + udptable = net->ipv4.udp_table;
>> + if (udp_hashed4(sk)) {
>> + udp4_rehash4(udptable, sk, hash);
>
> It's unclear to me how we can enter this branch. Also it's unclear why
> here you don't need to call udp_hash4_inc()udp_hash4_dec, too. Why such
> accounting can't be placed in udp4_rehash4()?
>
It's possible for a connected udp socket to _re-connect_ to another
remote address. In that case, because the local address is unchanged,
hash2 and its hash4_cnt stay the same, but rehash4 still needs to be
done.
I'll also add a comment here.
> [...]
>> @@ -2031,6 +2180,19 @@ void udp_lib_rehash(struct sock *sk, u16 newhash)
>> spin_unlock(&nhslot2->lock);
>> }
>>
>> + if (udp_hashed4(sk)) {
>> + udp4_rehash4(udptable, sk, newhash4);
>> +
>> + if (hslot2 != nhslot2) {
>> + spin_lock(&hslot2->lock);
>> + udp_hash4_dec(hslot2);
>> + spin_unlock(&hslot2->lock);
>> +
>> + spin_lock(&nhslot2->lock);
>> + udp_hash4_inc(nhslot2);
>> + spin_unlock(&nhslot2->lock);
>> + }
>> + }
>> spin_unlock_bh(&hslot->lock);
>
> The udp4_rehash4() call above is in atomic context and could end-up
> calling synchronize_rcu() which is a blocking function. You must avoid that.
>
I see, synchronize_rcu() cannot be called while holding a spinlock.
However, I don't have a good idea for solving this at the moment. Do you
have any thoughts or suggestions?
> Cheers,
>
> Paolo
Thanks for the review, Paolo. I'll address all your concerns in the
next version.
--
Philo
* Re: [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-25 3:50 ` Philo Lu
@ 2024-10-25 9:02 ` Paolo Abeni
2024-10-26 1:39 ` Philo Lu
0 siblings, 1 reply; 11+ messages in thread
From: Paolo Abeni @ 2024-10-25 9:02 UTC (permalink / raw)
To: Philo Lu, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 10/25/24 05:50, Philo Lu wrote:
> On 2024/10/24 23:01, Paolo Abeni wrote:
>> On 10/18/24 13:45, Philo Lu wrote:
>> [...]
>>> +/* In hash4, rehash can also happen in connect(), where hash4_cnt keeps unchanged. */
>>> +static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
>>> +{
>>> + struct udp_hslot *hslot4, *nhslot4;
>>> +
>>> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
>>> + nhslot4 = udp_hashslot4(udptable, newhash4);
>>> + udp_sk(sk)->udp_lrpa_hash = newhash4;
>>> +
>>> + if (hslot4 != nhslot4) {
>>> + spin_lock_bh(&hslot4->lock);
>>> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
>>> + hslot4->count--;
>>> + spin_unlock_bh(&hslot4->lock);
>>> +
>>> + synchronize_rcu();
>>
>> This deserve a comment explaining why it's needed. I had to dig in past
>> revision to understand it.
>>
>
> Got it. And a short explanation here (see [1] for detail):
>
> Here, we move a node from a hlist to another new one, i.e., update
> node->next from the old hlist to the new hlist. For readers traversing
> the old hlist, if we update node->next just when readers move onto the
> moved node, then the readers also move to the new hlist. This is unexpected.
>
> Reader(lookup) Writer(rehash)
> ----------------- ---------------
> 1. rcu_read_lock()
> 2. pos = sk;
> 3. hlist_del_init_rcu(sk, old_slot)
> 4. hlist_add_head_rcu(sk, new_slot)
> 5. pos = pos->next; <=
> 6. rcu_read_unlock()
>
> [1]
> https://lore.kernel.org/all/0fb425e0-5482-4cdf-9dc1-3906751f8f81@linux.alibaba.com/
Thanks. AFAICS the problem such a thing could cause is a lookup
failure for a socket positioned later in the same chain, when a previous
entry is moved to a different slot during a concurrent lookup.
I think that could be solved the same way TCP handles this scenario:
using an hlist_nulls RCU list for the hash4 bucket, checking that a
failed lookup ends in the same bucket where it started, and eventually
reiterating from the original bucket.
Have a look at __inet_lookup_established() for a more descriptive
reference, especially:
https://elixir.bootlin.com/linux/v6.12-rc4/source/net/ipv4/inet_hashtables.c#L528
>>> +
>>> + spin_lock_bh(&nhslot4->lock);
>>> + hlist_add_head_rcu(&udp_sk(sk)->udp_lrpa_node, &nhslot4->head);
>>> + nhslot4->count++;
>>> + spin_unlock_bh(&nhslot4->lock);
>>> + }
>>> +}
>>> +
>>> +static void udp4_unhash4(struct udp_table *udptable, struct sock *sk)
>>> +{
>>> + struct udp_hslot *hslot2, *hslot4;
>>> +
>>> + if (udp_hashed4(sk)) {
>>> + hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash);
>>> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
>>> +
>>> + spin_lock(&hslot4->lock);
>>> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
>>> + hslot4->count--;
>>> + spin_unlock(&hslot4->lock);
>>> +
>>> + spin_lock(&hslot2->lock);
>>> + udp_hash4_dec(hslot2);
>>> + spin_unlock(&hslot2->lock);
>>> + }
>>> +}
>>> +
>>> +/* call with sock lock */
>>> +static void udp4_hash4(struct sock *sk)
>>> +{
>>> + struct udp_hslot *hslot, *hslot2, *hslot4;
>>> + struct net *net = sock_net(sk);
>>> + struct udp_table *udptable;
>>> + unsigned int hash;
>>> +
>>> + if (sk_unhashed(sk) || inet_sk(sk)->inet_rcv_saddr == htonl(INADDR_ANY))
>>> + return;
>>> +
>>> + hash = udp_ehashfn(net, inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
>>> + inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
>>> +
>>> + udptable = net->ipv4.udp_table;
>>> + if (udp_hashed4(sk)) {
>>> + udp4_rehash4(udptable, sk, hash);
>>
>> It's unclear to me how we can enter this branch. Also it's unclear why
>> here you don't need to call udp_hash4_inc()udp_hash4_dec, too. Why such
>> accounting can't be placed in udp4_rehash4()?
>>
>
> It's possible that a connected udp socket _re-connect_ to another remote
> address. Then, because the local address is not changed, hash2 and its
> hash4_cnt keep unchanged. But rehash4 need to be done.
> I'll also add a comment here.
Right, a UDP socket can actually connect() successfully twice in a row
without a disconnect in between...
I almost missed the point that the ipv6 implementation is planned to
land afterwards.
I'm sorry, but I think that would be problematic - i.e. if ipv4 support
lands in 6.13 but ipv6 does not make it due to time constraints, we
will have (at least) a release with inconsistent behavior between ipv4
and ipv6. I think it would be better to bundle such changes together.
Thanks,
Paolo
* Re: [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket
2024-10-25 9:02 ` Paolo Abeni
@ 2024-10-26 1:39 ` Philo Lu
0 siblings, 0 replies; 11+ messages in thread
From: Philo Lu @ 2024-10-26 1:39 UTC (permalink / raw)
To: Paolo Abeni, netdev
Cc: willemdebruijn.kernel, davem, edumazet, kuba, dsahern,
antony.antony, steffen.klassert, linux-kernel, dust.li, jakub,
fred.cc, yubing.qiuyubing
On 2024/10/25 17:02, Paolo Abeni wrote:
> On 10/25/24 05:50, Philo Lu wrote:
>> On 2024/10/24 23:01, Paolo Abeni wrote:
>>> On 10/18/24 13:45, Philo Lu wrote:
>>> [...]
>>>> +/* In hash4, rehash can also happen in connect(), where hash4_cnt keeps unchanged. */
>>>> +static void udp4_rehash4(struct udp_table *udptable, struct sock *sk, u16 newhash4)
>>>> +{
>>>> + struct udp_hslot *hslot4, *nhslot4;
>>>> +
>>>> + hslot4 = udp_hashslot4(udptable, udp_sk(sk)->udp_lrpa_hash);
>>>> + nhslot4 = udp_hashslot4(udptable, newhash4);
>>>> + udp_sk(sk)->udp_lrpa_hash = newhash4;
>>>> +
>>>> + if (hslot4 != nhslot4) {
>>>> + spin_lock_bh(&hslot4->lock);
>>>> + hlist_del_init_rcu(&udp_sk(sk)->udp_lrpa_node);
>>>> + hslot4->count--;
>>>> + spin_unlock_bh(&hslot4->lock);
>>>> +
>>>> + synchronize_rcu();
>>>
>>> This deserve a comment explaining why it's needed. I had to dig in past
>>> revision to understand it.
>>>
>>
>> Got it. And a short explanation here (see [1] for detail):
>>
>> Here, we move a node from a hlist to another new one, i.e., update
>> node->next from the old hlist to the new hlist. For readers traversing
>> the old hlist, if we update node->next just when readers move onto the
>> moved node, then the readers also move to the new hlist. This is unexpected.
>>
>> Reader(lookup) Writer(rehash)
>> ----------------- ---------------
>> 1. rcu_read_lock()
>> 2. pos = sk;
>> 3. hlist_del_init_rcu(sk, old_slot)
>> 4. hlist_add_head_rcu(sk, new_slot)
>> 5. pos = pos->next; <=
>> 6. rcu_read_unlock()
>>
>> [1]
>> https://lore.kernel.org/all/0fb425e0-5482-4cdf-9dc1-3906751f8f81@linux.alibaba.com/
>
> Thanks. AFAICS the problem that such thing could cause is a lookup
> failure for a socket positioned later in the same chain when a previous
> entry is moved on a different slot during a concurrent lookup.
>
Yes, you're right.
> I think that could be solved the same way TCP is handling such scenario:
> using hlist_null RCU list for the hash4 bucket, checking that a failed
> lookup ends in the same bucket where it started and eventually
> reiterating from the original bucket.
>
> Have a look at __inet_lookup_established() for a more descriptive
> reference, especially:
>
> https://elixir.bootlin.com/linux/v6.12-rc4/source/net/ipv4/inet_hashtables.c#L528
>
Thank you! I'll try it in the next version.
>>>> +
...
>>>> +
>>>> +/* call with sock lock */
>>>> +static void udp4_hash4(struct sock *sk)
>>>> +{
>>>> + struct udp_hslot *hslot, *hslot2, *hslot4;
>>>> + struct net *net = sock_net(sk);
>>>> + struct udp_table *udptable;
>>>> + unsigned int hash;
>>>> +
>>>> + if (sk_unhashed(sk) || inet_sk(sk)->inet_rcv_saddr == htonl(INADDR_ANY))
>>>> + return;
>>>> +
>>>> + hash = udp_ehashfn(net, inet_sk(sk)->inet_rcv_saddr, inet_sk(sk)->inet_num,
>>>> + inet_sk(sk)->inet_daddr, inet_sk(sk)->inet_dport);
>>>> +
>>>> + udptable = net->ipv4.udp_table;
>>>> + if (udp_hashed4(sk)) {
>>>> + udp4_rehash4(udptable, sk, hash);
>>>
>>> It's unclear to me how we can enter this branch. Also it's unclear why
>>> here you don't need to call udp_hash4_inc()udp_hash4_dec, too. Why such
>>> accounting can't be placed in udp4_rehash4()?
>>>
>>
>> It's possible that a connected udp socket _re-connect_ to another remote
>> address. Then, because the local address is not changed, hash2 and its
>> hash4_cnt keep unchanged. But rehash4 need to be done.
>> I'll also add a comment here.
>
> Right, UDP socket could actually connect() successfully twice in a row
> without a disconnect in between...
>
> I almost missed the point that the ipv6 implementation is planned to
> land afterwards.
>
> I'm sorry, but I think that would be problematic - i.e. if ipv4 support
> will land in 6.13, but ipv6 will not make it - due to time constraints -
> we will have (at least a release with inconsistent behavior between ipv4
> and ipv6. I think it will be better bundle such changes together.
>
No problem. I can add ipv6 support in the next version too.
Thanks.
--
Philo
end of thread, other threads:[~2024-10-26 1:45 UTC | newest]
Thread overview: 11+ messages
2024-10-18 11:45 [PATCH v5 net-next 0/3] udp: Add 4-tuple hash for connected sockets Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 1/3] net/udp: Add a new struct for hash2 slot Philo Lu
2024-10-24 14:31 ` Paolo Abeni
2024-10-24 14:38 ` Paolo Abeni
2024-10-18 11:45 ` [PATCH v5 net-next 2/3] net/udp: Add 4-tuple hash list basis Philo Lu
2024-10-18 11:45 ` [PATCH v5 net-next 3/3] ipv4/udp: Add 4-tuple hash for connected socket Philo Lu
2024-10-24 15:01 ` Paolo Abeni
2024-10-24 15:04 ` Paolo Abeni
2024-10-25 3:50 ` Philo Lu
2024-10-25 9:02 ` Paolo Abeni
2024-10-26 1:39 ` Philo Lu