* [PATCH net-next] udp: handle gro_receive only when necessary
From: zhangliping @ 2017-12-18 4:11 UTC
To: davem, netdev; +Cc: zhangliping
From: zhangliping <zhangliping02@baidu.com>
In our UDP stress test, the rx rate improves from ~2500 kpps to ~2800 kpps
after GRO is disabled. The perf reports show some notable differences:
1. gro is enabled:
24.23% [kernel] [k] udp4_lib_lookup2
5.42% [kernel] [k] __memcpy
3.87% [kernel] [k] fib_table_lookup
3.76% [kernel] [k] __netif_receive_skb_core
3.68% [kernel] [k] ip_rcv
2. gro is disabled:
9.66% [kernel] [k] udp4_lib_lookup2
9.47% [kernel] [k] __memcpy
4.75% [kernel] [k] fib_table_lookup
4.71% [kernel] [k] __netif_receive_skb_core
3.90% [kernel] [k] virtnet_poll
So if no UDP tunnel (such as vxlan) is configured, we can skip
the UDP GRO processing entirely.
Signed-off-by: zhangliping <zhangliping02@baidu.com>
---
include/net/udp.h | 2 ++
net/ipv4/udp_offload.c | 7 +++++++
net/ipv4/udp_tunnel.c | 11 ++++++++++-
3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/include/net/udp.h b/include/net/udp.h
index 6c759c8594e2..c503f8b06845 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -188,6 +188,8 @@ static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
return uh;
}
+extern struct static_key_false udp_gro_needed;
+
/* hash routines shared between UDPv4/6 and UDP-Litev4/6 */
static inline int udp_lib_hash(struct sock *sk)
{
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 01801b77bd0d..9cb11a833964 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -10,10 +10,14 @@
* UDPv4 GSO support
*/
+#include <linux/static_key.h>
#include <linux/skbuff.h>
#include <net/udp.h>
#include <net/protocol.h>
+DEFINE_STATIC_KEY_FALSE(udp_gro_needed);
+EXPORT_SYMBOL_GPL(udp_gro_needed);
+
static struct sk_buff *__skb_udp_tunnel_segment(struct sk_buff *skb,
netdev_features_t features,
struct sk_buff *(*gso_inner_segment)(struct sk_buff *skb,
@@ -250,6 +254,9 @@ struct sk_buff **udp_gro_receive(struct sk_buff **head, struct sk_buff *skb,
int flush = 1;
struct sock *sk;
+ if (!static_branch_unlikely(&udp_gro_needed))
+ goto out;
+
if (NAPI_GRO_CB(skb)->encap_mark ||
(skb->ip_summed != CHECKSUM_PARTIAL &&
NAPI_GRO_CB(skb)->csum_cnt == 0 &&
diff --git a/net/ipv4/udp_tunnel.c b/net/ipv4/udp_tunnel.c
index 6539ff15e9a3..4a7b3c8223c0 100644
--- a/net/ipv4/udp_tunnel.c
+++ b/net/ipv4/udp_tunnel.c
@@ -1,4 +1,5 @@
#include <linux/module.h>
+#include <linux/static_key.h>
#include <linux/errno.h>
#include <linux/socket.h>
#include <linux/udp.h>
@@ -73,6 +74,9 @@ void setup_udp_tunnel_sock(struct net *net, struct socket *sock,
udp_sk(sk)->gro_complete = cfg->gro_complete;
udp_tunnel_encap_enable(sock);
+
+ if (udp_sk(sk)->gro_receive)
+ static_branch_inc(&udp_gro_needed);
}
EXPORT_SYMBOL_GPL(setup_udp_tunnel_sock);
@@ -185,7 +189,12 @@ EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
void udp_tunnel_sock_release(struct socket *sock)
{
- rcu_assign_sk_user_data(sock->sk, NULL);
+ struct sock *sk = sock->sk;
+
+ if (udp_sk(sk)->gro_receive)
+ static_branch_dec(&udp_gro_needed);
+
+ rcu_assign_sk_user_data(sk, NULL);
kernel_sock_shutdown(sock, SHUT_RDWR);
sock_release(sock);
}
--
2.13.4
* Re: [PATCH net-next] udp: handle gro_receive only when necessary
From: Paolo Abeni @ 2017-12-18 10:26 UTC
To: zhangliping, davem, netdev; +Cc: zhangliping
Hi,
On Mon, 2017-12-18 at 12:11 +0800, zhangliping wrote:
> From: zhangliping <zhangliping02@baidu.com>
>
> In our UDP stress test, the rx rate improves from ~2500 kpps to ~2800 kpps
> after GRO is disabled. The perf reports show some notable differences:
> 1. gro is enabled:
> 24.23% [kernel] [k] udp4_lib_lookup2
> 5.42% [kernel] [k] __memcpy
> 3.87% [kernel] [k] fib_table_lookup
> 3.76% [kernel] [k] __netif_receive_skb_core
> 3.68% [kernel] [k] ip_rcv
>
> 2. gro is disabled:
> 9.66% [kernel] [k] udp4_lib_lookup2
> 9.47% [kernel] [k] __memcpy
> 4.75% [kernel] [k] fib_table_lookup
> 4.71% [kernel] [k] __netif_receive_skb_core
> 3.90% [kernel] [k] virtnet_poll
>
> So if no UDP tunnel (such as vxlan) is configured, we can skip
> the UDP GRO processing entirely.
I tested something similar some time ago, but I measured a much smaller
gain. Also, the topmost perf offenders look quite different from what I
see here; can you please share more details about the test case?
> Signed-off-by: zhangliping <zhangliping02@baidu.com>
> ---
> include/net/udp.h | 2 ++
> net/ipv4/udp_offload.c | 7 +++++++
> net/ipv4/udp_tunnel.c | 11 ++++++++++-
> 3 files changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/udp.h b/include/net/udp.h
> index 6c759c8594e2..c503f8b06845 100644
> --- a/include/net/udp.h
> +++ b/include/net/udp.h
> @@ -188,6 +188,8 @@ static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
> return uh;
> }
>
> +extern struct static_key_false udp_gro_needed;
> +
> /* hash routines shared between UDPv4/6 and UDP-Litev4/6 */
> static inline int udp_lib_hash(struct sock *sk)
> {
> diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
> index 01801b77bd0d..9cb11a833964 100644
> --- a/net/ipv4/udp_offload.c
> +++ b/net/ipv4/udp_offload.c
> @@ -10,10 +10,14 @@
> * UDPv4 GSO support
> */
>
> +#include <linux/static_key.h>
> #include <linux/skbuff.h>
> #include <net/udp.h>
> #include <net/protocol.h>
>
> +DEFINE_STATIC_KEY_FALSE(udp_gro_needed);
> +EXPORT_SYMBOL_GPL(udp_gro_needed);
> +
I think that adding a new static key is not required, as we can
probably reuse 'udp_encap_needed' and 'udpv6_encap_needed'. The latter
choice allows earlier branching (in
udp4_gro_receive()/udp6_gro_receive() instead of udp_gro_receive()).
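To illustrate, a rough and completely untested sketch of that variant,
assuming udp_encap_needed (currently static in net/ipv4/udp.c) were
exported so that udp_offload.c could test it:

static struct sk_buff **udp4_gro_receive(struct sk_buff **head,
					 struct sk_buff *skb)
{
	struct udphdr *uh;

	/* no encap socket has ever been opened: skip all UDP GRO work */
	if (!static_key_false(&udp_encap_needed))
		goto flush;

	uh = udp_gro_udphdr(skb);
	if (unlikely(!uh))
		goto flush;

	/* ... existing checksum handling, unchanged ... */

	NAPI_GRO_CB(skb)->is_ipv6 = 0;
	return udp_gro_receive(head, skb, uh, udp4_lib_lookup_skb);

flush:
	NAPI_GRO_CB(skb)->flush = 1;
	return NULL;
}

That way packets on a tunnel-free host never even reach udp_gro_receive().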
Cheers,
Paolo
* Re: [PATCH net-next] udp: handle gro_receive only when necessary
From: zhangliping @ 2017-12-18 12:09 UTC
To: Paolo Abeni; +Cc: davem, netdev, zhangliping
Hi,
At 2017-12-18 18:26:28, "Paolo Abeni" <pabeni@redhat.com> wrote:
>Hi,
>
>On Mon, 2017-12-18 at 12:11 +0800, zhangliping wrote:
>> From: zhangliping <zhangliping02@baidu.com>
>>
>> In our UDP stress test, the rx rate improves from ~2500 kpps to ~2800 kpps
>> after GRO is disabled. The perf reports show some notable differences:
>> 1. gro is enabled:
>> 24.23% [kernel] [k] udp4_lib_lookup2
>> 5.42% [kernel] [k] __memcpy
>> 3.87% [kernel] [k] fib_table_lookup
>> 3.76% [kernel] [k] __netif_receive_skb_core
>> 3.68% [kernel] [k] ip_rcv
>>
>> 2. gro is disabled:
>> 9.66% [kernel] [k] udp4_lib_lookup2
>> 9.47% [kernel] [k] __memcpy
>> 4.75% [kernel] [k] fib_table_lookup
>> 4.71% [kernel] [k] __netif_receive_skb_core
>> 3.90% [kernel] [k] virtnet_poll
>>
>> So if no UDP tunnel (such as vxlan) is configured, we can skip
>> the UDP GRO processing entirely.
>
>I tested something similar some time ago, but I measured a much smaller
>gain. Also, the topmost perf offenders look quite different from what I
>see here; can you please share more details about the test case?
My test case is very simple: two VMs are connected via OVS + DPDK, and
RPS is enabled inside the VMs. One VM runs "iperf -s -u &", and the other
runs "iperf -c 1.1.1.2 -P 12 -u -b 10Gbps -l 40 -t 36000".
On the iperf server side, I use the sar tool to watch the rx rate.
>> +DEFINE_STATIC_KEY_FALSE(udp_gro_needed);
>> +EXPORT_SYMBOL_GPL(udp_gro_needed);
>> +
>
>I think that adding a new static key is not required, as we can
>probably reuse 'udp_encap_needed' and 'udpv6_encap_needed'. The latter
>choice allows earlier branching (in
>udp4_gro_receive()/udp6_gro_receive() instead of udp_gro_receive()).
Yes, we can reuse udpX_encap_needed; that was indeed my first attempt.
But I found that some UDP tunnels don't implement gro_receive (such as
l2tp and udp_media), and udpX_encap_needed won't be disabled after it
has been enabled, at least for now.
So I finally chose to add a new udp_gro_needed, which seems a little
redundant. :(
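For what it's worth, here is a minimal self-contained illustration
(hypothetical demo_key, not part of the patch) of why the refcounted
static_branch API fits this case: static_branch_inc()/static_branch_dec()
keep a count, so the branch is patched out again once the last tunnel
socket is released, whereas udp_encap_needed, once enabled, is never
switched back off.

#include <linux/static_key.h>

DEFINE_STATIC_KEY_FALSE(demo_key);

static void demo_tunnel_add(void)
{
	static_branch_inc(&demo_key);	/* count 0 -> 1: branch patched in */
}

static void demo_tunnel_del(void)
{
	static_branch_dec(&demo_key);	/* count 1 -> 0: branch patched out */
}

static bool demo_rx_wants_gro(void)
{
	/* compiles down to a nop/jump; true only while a tunnel exists */
	return static_branch_unlikely(&demo_key);
}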
* Re: [PATCH net-next] udp: handle gro_receive only when necessary
From: Paolo Abeni @ 2017-12-18 14:45 UTC
To: zhangliping; +Cc: davem, netdev, zhangliping
On Mon, 2017-12-18 at 20:09 +0800, zhangliping wrote:
> My test case is very simple: two VMs are connected via OVS + DPDK, and
> RPS is enabled inside the VMs. One VM runs "iperf -s -u &", and the other
> runs "iperf -c 1.1.1.2 -P 12 -u -b 10Gbps -l 40 -t 36000".
Understood, thanks. Still, the time spent in 'udp4_lib_lookup2' looks
quite different (and higher) than what I observe in my tests. Are you
using x86_64? If not, do you see many cache misses in udp4_lib_lookup2?
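For reference, something along these lines should show it (hypothetical
invocations; adjust the duration and filters to your setup):

  # sample cache misses system-wide while the iperf flood is running
  perf record -e cache-misses -a -- sleep 10
  # then check how many of them land in the lookup function
  perf report --symbols=udp4_lib_lookup2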
Thanks,
Paolo
* Re: [PATCH net-next] udp: handle gro_receive only when necessary
From: zhangliping @ 2017-12-19 11:01 UTC
To: Paolo Abeni; +Cc: davem, netdev, zhangliping
Hi,
At 2017-12-18 22:45:30, "Paolo Abeni" <pabeni@redhat.com> wrote:
>Understood, thanks. Still, the time spent in 'udp4_lib_lookup2' looks
>quite different (and higher) than what I observe in my tests. Are you
>using x86_64? If not, do you see many cache misses in udp4_lib_lookup2?
Yes, x86_64. Here is the host's lscpu output:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 6
CPU socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Stepping: 4
CPU MHz: 2095.074
BogoMIPS: 4196.28
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-5
NUMA node1 CPU(s): 6-11
Btw, my guest OS is CentOS with kernel 3.10.0-514.26.2.el7.x86_64; is
this kernel too old for testing?
* Re: [PATCH net-next] udp: handle gro_receive only when necessary
From: Paolo Abeni @ 2017-12-19 11:47 UTC
To: zhangliping; +Cc: davem, netdev, zhangliping
On Tue, 2017-12-19 at 19:01 +0800, zhangliping wrote:
> At 2017-12-18 22:45:30, "Paolo Abeni" <pabeni@redhat.com> wrote:
> > Understood, thanks. Still, the time spent in 'udp4_lib_lookup2' looks
> > quite different (and higher) than what I observe in my tests. Are you
> > using x86_64? If not, do you see many cache misses in udp4_lib_lookup2?
>
> Yes, x86_64. Here is the host's lscpu output info:
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 12
> On-line CPU(s) list: 0-11
> Thread(s) per core: 1
> Core(s) per socket: 6
> CPU socket(s): 2
> NUMA node(s): 2
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 62
> Stepping: 4
> CPU MHz: 2095.074
> BogoMIPS: 4196.28
> Virtualization: VT-x
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 15360K
> NUMA node0 CPU(s): 0-5
> NUMA node1 CPU(s): 6-11
>
> Btw, my guest OS is CentOS with kernel 3.10.0-514.26.2.el7.x86_64; is
> this kernel too old for testing?
Understood. Yes, such a kernel is a bit too old. So the perf traces you
reported refer to the CentOS kernel?
If you try a current vanilla kernel (or an upcoming RHEL 7.5, for
shameless self-promotion) you should see much better figures (and a
smaller difference with your patch in).
Cheers,
Paolo