* [PATCH net-next v2 0/3] udp: reduce cache pressure
From: Paolo Abeni @ 2017-06-06 14:23 UTC
  To: netdev; +Cc: David S. Miller, Eric Dumazet

In the most common use case, many skb fields are not used by recvmsg(), and
the few that are actually accessed lie on cold cachelines, which leads to
several cache misses per packet.

This patch series attempts to reduce such misses with different strategies:
* caching the interesting fields in the scratch space
* avoiding accesses to uninteresting fields altogether
* prefetching

Tested using the udp_sink program by Jesper[1] as the receiver, with an h/w
L4 rx hash enabled on the ingress NIC, so that the number of ingress NIC rx
queues hit by the UDP traffic could be controlled via ethtool -L.

The udp_sink program was bound to the first idle cpu, to get more
stable numbers.

On a single numa node receiver:

nic rx queues           vanilla                 patched kernel      delta
1                       1850 kpps               1850 kpps           0%
2                       2370 kpps               2700 kpps           13.9%
16                      2000 kpps               2220 kpps           11%

[1] https://github.com/netoptimizer/network-testing/blob/master/src/udp_sink.c

v1 -> v2:
  - replaced secpath_reset() with skb_release_head_state()
  - changed udp_dev_scratch field types to u{32,16} variants,
    replaced the bitfield with bool

Paolo Abeni (3):
  net: factor out a helper to decrement the skb refcount
  udp: avoid a cache miss on dequeue
  udp: try to avoid 2 cache miss on dequeue

 include/linux/skbuff.h |  15 +++++++
 net/core/datagram.c    |   4 +-
 net/core/skbuff.c      |  38 ++++++++++------
 net/ipv4/udp.c         | 120 ++++++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 148 insertions(+), 29 deletions(-)

-- 
2.9.4


* [PATCH net-next v2 1/3] net: factor out a helper to decrement the skb refcount
From: Paolo Abeni @ 2017-06-06 14:23 UTC
  To: netdev; +Cc: David S. Miller, Eric Dumazet

The same code is replicated in 3 different places; move it to a
common helper.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
---
 include/linux/skbuff.h | 13 +++++++++++++
 net/core/datagram.c    |  4 +---
 net/core/skbuff.c      | 14 ++++----------
 3 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 45a59c1..3c25fca 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -867,6 +867,19 @@ static inline unsigned int skb_napi_id(const struct sk_buff *skb)
 #endif
 }
 
+/* decrement the reference count and return true if we can free the skb */
+static inline bool skb_unref(struct sk_buff *skb)
+{
+	if (unlikely(!skb))
+		return false;
+	if (likely(atomic_read(&skb->users) == 1))
+		smp_rmb();
+	else if (likely(!atomic_dec_and_test(&skb->users)))
+		return false;
+
+	return true;
+}
+
 void kfree_skb(struct sk_buff *skb);
 void kfree_skb_list(struct sk_buff *segs);
 void skb_tx_error(struct sk_buff *skb);
diff --git a/net/core/datagram.c b/net/core/datagram.c
index bc46118..e5311a7 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -330,9 +330,7 @@ void __skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb, int len)
 {
 	bool slow;
 
-	if (likely(atomic_read(&skb->users) == 1))
-		smp_rmb();
-	else if (likely(!atomic_dec_and_test(&skb->users))) {
+	if (!skb_unref(skb)) {
 		sk_peek_offset_bwd(sk, len);
 		return;
 	}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 780b7c1..c81b828 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -694,12 +694,9 @@ EXPORT_SYMBOL(__kfree_skb);
  */
 void kfree_skb(struct sk_buff *skb)
 {
-	if (unlikely(!skb))
-		return;
-	if (likely(atomic_read(&skb->users) == 1))
-		smp_rmb();
-	else if (likely(!atomic_dec_and_test(&skb->users)))
+	if (!skb_unref(skb))
 		return;
+
 	trace_kfree_skb(skb, __builtin_return_address(0));
 	__kfree_skb(skb);
 }
@@ -746,12 +743,9 @@ EXPORT_SYMBOL(skb_tx_error);
  */
 void consume_skb(struct sk_buff *skb)
 {
-	if (unlikely(!skb))
-		return;
-	if (likely(atomic_read(&skb->users) == 1))
-		smp_rmb();
-	else if (likely(!atomic_dec_and_test(&skb->users)))
+	if (!skb_unref(skb))
 		return;
+
 	trace_consume_skb(skb);
 	__kfree_skb(skb);
 }
-- 
2.9.4
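
The users == 1 branch in the new helper is the subtle part. Here is the
same logic restated as a sketch, with the reasoning spelled out in
comments (the function name is hypothetical; the behaviour is exactly
that of skb_unref() above):

#include <linux/skbuff.h>

/* sketch: skb_unref() with the fast-path reasoning annotated */
static inline bool skb_unref_sketch(struct sk_buff *skb)
{
	if (unlikely(!skb))
		return false;
	if (likely(atomic_read(&skb->users) == 1))
		/* we are the last owner, so no atomic operation is
		 * needed; the read barrier pairs with the barrier
		 * implied by atomic_dec_and_test() on the slow path
		 */
		smp_rmb();
	else if (likely(!atomic_dec_and_test(&skb->users)))
		/* somebody else still holds a reference */
		return false;

	return true;	/* the caller may now free the skb */
}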


* [PATCH net-next v2 2/3] udp: avoid a cache miss on dequeue
From: Paolo Abeni @ 2017-06-06 14:23 UTC
  To: netdev; +Cc: David S. Miller, Eric Dumazet

Since UDP no longer uses sk->destructor, we can completely clear
the skb head state before enqueuing. Amend and use
skb_release_head_state() for that.

All the head states share a single cacheline, which is not
normally used/accessed on dequeue. We can avoid touching that
cacheline entirely by implementing, and using in the UDP code, a
specialized skb free helper which ignores the skb head state.

This saves a cacheline miss at skb deallocation time.
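
To make the enqueue/dequeue pairing concrete, a minimal sketch follows;
the wrapper names are hypothetical, while the real call sites are in the
diff below:

#include <linux/skbuff.h>
#include <net/udp.h>

/* enqueue side: drop dst, secpath, destructor, etc. while the
 * head-state cacheline is still hot from the receive path
 */
static int udp_enqueue_sketch(struct sock *sk, struct sk_buff *skb)
{
	skb_release_head_state(skb);
	return __udp_enqueue_schedule_skb(sk, skb);
}

/* dequeue side: the head states are already gone, so the free path
 * can skip their (now cold) cacheline entirely
 */
static void udp_dequeue_free_sketch(struct sk_buff *skb)
{
	consume_stateless_skb(skb);
}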

v1 -> v2:
  replaced secpath_reset() with skb_release_head_state()

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 include/linux/skbuff.h |  2 ++
 net/core/skbuff.c      | 24 ++++++++++++++++++++----
 net/ipv4/udp.c         |  6 +++++-
 3 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 3c25fca..ddfb964 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -880,10 +880,12 @@ static inline bool skb_unref(struct sk_buff *skb)
 	return true;
 }
 
+void skb_release_head_state(struct sk_buff *skb);
 void kfree_skb(struct sk_buff *skb);
 void kfree_skb_list(struct sk_buff *segs);
 void skb_tx_error(struct sk_buff *skb);
 void consume_skb(struct sk_buff *skb);
+void consume_stateless_skb(struct sk_buff *skb);
 void  __kfree_skb(struct sk_buff *skb);
 extern struct kmem_cache *skbuff_head_cache;
 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c81b828..786cb42 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -643,12 +643,10 @@ static void kfree_skbmem(struct sk_buff *skb)
 	kmem_cache_free(skbuff_fclone_cache, fclones);
 }
 
-static void skb_release_head_state(struct sk_buff *skb)
+void skb_release_head_state(struct sk_buff *skb)
 {
 	skb_dst_drop(skb);
-#ifdef CONFIG_XFRM
-	secpath_put(skb->sp);
-#endif
+	secpath_reset(skb);
 	if (skb->destructor) {
 		WARN_ON(in_irq());
 		skb->destructor(skb);
@@ -751,6 +749,24 @@ void consume_skb(struct sk_buff *skb)
 }
 EXPORT_SYMBOL(consume_skb);
 
+/**
+ *	consume_stateless_skb - free an skbuff, assuming it is stateless
+ *	@skb: buffer to free
+ *
+ *	Works like consume_skb(), but this variant assumes that all the head
+ *	states have been already dropped.
+ */
+void consume_stateless_skb(struct sk_buff *skb)
+{
+	if (!skb_unref(skb))
+		return;
+
+	trace_consume_skb(skb);
+	if (likely(skb->head))
+		skb_release_data(skb);
+	kfree_skbmem(skb);
+}
+
 void __kfree_skb_flush(void)
 {
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index fdcb743..d8b265f 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1359,7 +1359,8 @@ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len)
 		sk_peek_offset_bwd(sk, len);
 		unlock_sock_fast(sk, slow);
 	}
-	consume_skb(skb);
+
+	consume_stateless_skb(skb);
 }
 EXPORT_SYMBOL_GPL(skb_consume_udp);
 
@@ -1739,6 +1740,9 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 		sk_mark_napi_id_once(sk, skb);
 	}
 
+	/* clear all pending head states while they are hot in the cache */
+	skb_release_head_state(skb);
+
 	rc = __udp_enqueue_schedule_skb(sk, skb);
 	if (rc < 0) {
 		int is_udplite = IS_UDPLITE(sk);
-- 
2.9.4


* [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: Paolo Abeni @ 2017-06-06 14:23 UTC
  To: netdev; +Cc: David S. Miller, Eric Dumazet

When udp_recvmsg() is executed, on x86_64 and other archs, most skb
fields are on cold cachelines.
If the skb is linear and the kernel doesn't need to compute the UDP
csum, only a handful of skb fields are required by udp_recvmsg().
Since we already use skb->dev_scratch to cache hot data, and
there are 32 unused bits on 64-bit archs, use that field to cache
as much data as we can, and try to prefetch on dequeue the relevant
fields that are left out.

This can save up to 2 cache misses per packet.
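
The bit budget works out exactly: 32 (truesize) + 16 (len) + 8 + 8 = 64
bits, i.e. sizeof(unsigned long) on 64-bit archs. A standalone userspace
mirror of the layout, checking the same invariant the patch enforces with
BUILD_BUG_ON() (a sketch only, not kernel code):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* userspace mirror of the kernel's udp_dev_scratch layout */
struct udp_dev_scratch {
	uint32_t truesize;	/* 32 bits */
	uint16_t len;		/* 16 bits: enough, since the UDP header
				 * has already been validated and pulled */
	bool is_linear;		/* 8 bits on common ABIs */
	bool csum_unnecessary;	/* 8 bits on common ABIs */
};

int main(void)
{
	/* the whole struct must fit in skb->dev_scratch */
	assert(sizeof(struct udp_dev_scratch) <= sizeof(long));
	return 0;
}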

v1 -> v2:
  - changed udp_dev_scratch field types to u{32,16} variants,
    replaced the bitfield with bool

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/ipv4/udp.c | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 103 insertions(+), 11 deletions(-)

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index d8b265f..2bc638c 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1163,6 +1163,83 @@ int udp_sendpage(struct sock *sk, struct page *page, int offset,
 	return ret;
 }
 
+/* Copy as much information as possible into skb->dev_scratch to avoid
+ * possibly multiple cache miss on dequeue();
+ */
+#if BITS_PER_LONG == 64
+
+/* we can store multiple info here: truesize, len and the bit needed to
+ * compute skb_csum_unnecessary will be on cold cache lines at recvmsg
+ * time.
+ * skb->len can be stored on 16 bits since the udp header has been already
+ * validated and pulled.
+ */
+struct udp_dev_scratch {
+	u32 truesize;
+	u16 len;
+	bool is_linear;
+	bool csum_unnecessary;
+};
+
+static void udp_set_dev_scratch(struct sk_buff *skb)
+{
+	struct udp_dev_scratch *scratch;
+
+	BUILD_BUG_ON(sizeof(struct udp_dev_scratch) > sizeof(long));
+	scratch = (struct udp_dev_scratch *)&skb->dev_scratch;
+	scratch->truesize = skb->truesize;
+	scratch->len = skb->len;
+	scratch->csum_unnecessary = !!skb_csum_unnecessary(skb);
+	scratch->is_linear = !skb_is_nonlinear(skb);
+}
+
+static int udp_skb_truesize(struct sk_buff *skb)
+{
+	return ((struct udp_dev_scratch *)&skb->dev_scratch)->truesize;
+}
+
+static unsigned int udp_skb_len(struct sk_buff *skb)
+{
+	return ((struct udp_dev_scratch *)&skb->dev_scratch)->len;
+}
+
+static bool udp_skb_csum_unnecessary(struct sk_buff *skb)
+{
+	return ((struct udp_dev_scratch *)&skb->dev_scratch)->csum_unnecessary;
+}
+
+static bool udp_skb_is_linear(struct sk_buff *skb)
+{
+	return ((struct udp_dev_scratch *)&skb->dev_scratch)->is_linear;
+}
+
+#else
+static void udp_set_dev_scratch(struct sk_buff *skb)
+{
+	skb->dev_scratch = skb->truesize;
+}
+
+static int udp_skb_truesize(struct sk_buff *skb)
+{
+	return skb->dev_scratch;
+}
+
+static unsigned int udp_skb_len(struct sk_buff *skb)
+{
+	return skb->len;
+}
+
+static bool udp_skb_csum_unnecessary(struct sk_buff *skb)
+{
+	return skb_csum_unnecessary(skb);
+}
+
+static bool udp_skb_is_linear(struct sk_buff *skb)
+{
+	return !skb_is_nonlinear(skb);
+}
+#endif
+
 /* fully reclaim rmem/fwd memory allocated for skb */
 static void udp_rmem_release(struct sock *sk, int size, int partial,
 			     bool rx_queue_lock_held)
@@ -1213,14 +1290,16 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
  */
 void udp_skb_destructor(struct sock *sk, struct sk_buff *skb)
 {
-	udp_rmem_release(sk, skb->dev_scratch, 1, false);
+	prefetch(&skb->data);
+	udp_rmem_release(sk, udp_skb_truesize(skb), 1, false);
 }
 EXPORT_SYMBOL(udp_skb_destructor);
 
 /* as above, but the caller held the rx queue lock, too */
 static void udp_skb_dtor_locked(struct sock *sk, struct sk_buff *skb)
 {
-	udp_rmem_release(sk, skb->dev_scratch, 1, true);
+	prefetch(&skb->data);
+	udp_rmem_release(sk, udp_skb_truesize(skb), 1, true);
 }
 
 /* Idea of busylocks is to let producers grab an extra spinlock
@@ -1274,10 +1353,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 		busy = busylock_acquire(sk);
 	}
 	size = skb->truesize;
-	/* Copy skb->truesize into skb->dev_scratch to avoid a cache line miss
-	 * in udp_skb_destructor()
-	 */
-	skb->dev_scratch = size;
+	udp_set_dev_scratch(skb);
 
 	/* we drop only if the receive buf is full and the receive
 	 * queue contains some other skb
@@ -1515,6 +1591,18 @@ struct sk_buff *__skb_recv_udp(struct sock *sk, unsigned int flags,
 }
 EXPORT_SYMBOL_GPL(__skb_recv_udp);
 
+static int copy_linear_skb(struct sk_buff *skb, int len, int off,
+			   struct iov_iter *to)
+{
+	int n, copy = len - off;
+
+	n = copy_to_iter(skb->data + off, copy, to);
+	if (n == copy)
+		return 0;
+
+	return -EFAULT;
+}
+
 /*
  * 	This should be easy, if there is something there we
  * 	return it, otherwise we block.
@@ -1541,7 +1629,7 @@ int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
 	if (!skb)
 		return err;
 
-	ulen = skb->len;
+	ulen = udp_skb_len(skb);
 	copied = len;
 	if (copied > ulen - off)
 		copied = ulen - off;
@@ -1556,14 +1644,18 @@ int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
 
 	if (copied < ulen || peeking ||
 	    (is_udplite && UDP_SKB_CB(skb)->partial_cov)) {
-		checksum_valid = !udp_lib_checksum_complete(skb);
+		checksum_valid = udp_skb_csum_unnecessary(skb) ||
+				!__udp_lib_checksum_complete(skb);
 		if (!checksum_valid)
 			goto csum_copy_err;
 	}
 
-	if (checksum_valid || skb_csum_unnecessary(skb))
-		err = skb_copy_datagram_msg(skb, off, msg, copied);
-	else {
+	if (checksum_valid || udp_skb_csum_unnecessary(skb)) {
+		if (udp_skb_is_linear(skb))
+			err = copy_linear_skb(skb, copied, off, &msg->msg_iter);
+		else
+			err = skb_copy_datagram_msg(skb, off, msg, copied);
+	} else {
 		err = skb_copy_and_csum_datagram_msg(skb, off, msg);
 
 		if (err == -EINVAL)
-- 
2.9.4
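
Putting the pieces together, the copy-to-user step for the common case
condenses to the sketch below; it assumes the checksum is already known
good (the checksum_valid || udp_skb_csum_unnecessary() test in the diff
above), and the function name is hypothetical:

/* copy a verified datagram to userspace, preferring the linear path */
static int udp_copy_to_user_sketch(struct sk_buff *skb, int off,
				   struct msghdr *msg, int copied)
{
	/* all the needed skb info comes from the hot scratch area */
	if (udp_skb_is_linear(skb))
		return copy_linear_skb(skb, copied, off, &msg->msg_iter);
	return skb_copy_datagram_msg(skb, off, msg, copied);
}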


* Re: [PATCH net-next v2 2/3] udp: avoid a cache miss on dequeue
From: Eric Dumazet @ 2017-06-06 20:53 UTC
  To: Paolo Abeni; +Cc: netdev, David S. Miller, Eric Dumazet

On Tue, 2017-06-06 at 16:23 +0200, Paolo Abeni wrote:
> Since UDP no longer uses sk->destructor, we can completely clear
> the skb head state before enqueuing. Amend and use
> skb_release_head_state() for that.
> 
> All the head states share a single cacheline, which is not
> normally used/accessed on dequeue. We can avoid touching that
> cacheline entirely by implementing, and using in the UDP code, a
> specialized skb free helper which ignores the skb head state.
> 
> This saves a cacheline miss at skb deallocation time.

Acked-by: Eric Dumazet <edumazet@google.com>


* Re: [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: Eric Dumazet @ 2017-06-06 20:58 UTC
  To: Paolo Abeni; +Cc: netdev, David S. Miller, Eric Dumazet

On Tue, 2017-06-06 at 16:23 +0200, Paolo Abeni wrote:
> When udp_recvmsg() is executed, on x86_64 and other archs, most skb
> fields are on cold cachelines.
> If the skb is linear and the kernel doesn't need to compute the UDP
> csum, only a handful of skb fields are required by udp_recvmsg().
> Since we already use skb->dev_scratch to cache hot data, and
> there are 32 unused bits on 64-bit archs, use that field to cache
> as much data as we can, and try to prefetch on dequeue the relevant
> fields that are left out.

Acked-by: Eric Dumazet <edumazet@google.com>


* Re: [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: Paolo Abeni @ 2017-06-07  7:56 UTC
  To: netdev, David S. Miller; +Cc: Eric Dumazet

Hi David,

On Tue, 2017-06-06 at 16:23 +0200, Paolo Abeni wrote:
> When udp_recvmsg() is executed, on x86_64 and other archs, most skb
> fields are on cold cachelines.
> If the skb is linear and the kernel doesn't need to compute the UDP
> csum, only a handful of skb fields are required by udp_recvmsg().
> Since we already use skb->dev_scratch to cache hot data, and
> there are 32 unused bits on 64-bit archs, use that field to cache
> as much data as we can, and try to prefetch on dequeue the relevant
> fields that are left out.
> 
> This can save up to 2 cache misses per packet.
> 
> v1 -> v2:
>   - changed udp_dev_scratch field types to u{32,16} variants,
>     replaced the bitfield with bool
> 
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Can you please keep this series on hold for a little while? The
lkp-robot just reported a performance regression on v1 which I still
have to investigate. I can't look into it right away, but I expect the
same should apply to v2.

It sounds quite weird to me, since the bisected patch touches only the
UDP code, while the regression is on apachebench.

Thank you,

Paolo


* Re: [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: David Miller @ 2017-06-07 14:10 UTC
  To: pabeni; +Cc: netdev, edumazet

From: Paolo Abeni <pabeni@redhat.com>
Date: Wed, 07 Jun 2017 09:56:45 +0200

> Hi David,
> 
> On Tue, 2017-06-06 at 16:23 +0200, Paolo Abeni wrote:
>> When udp_recvmsg() is executed, on x86_64 and other archs, most skb
>> fields are on cold cachelines.
>> If the skb is linear and the kernel doesn't need to compute the UDP
>> csum, only a handful of skb fields are required by udp_recvmsg().
>> Since we already use skb->dev_scratch to cache hot data, and
>> there are 32 unused bits on 64-bit archs, use that field to cache
>> as much data as we can, and try to prefetch on dequeue the relevant
>> fields that are left out.
>> 
>> This can save up to 2 cache misses per packet.
>> 
>> v1 -> v2:
>>   - changed udp_dev_scratch field types to u{32,16} variants,
>>     replaced the bitfield with bool
>> 
>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> 
> Can you please keep this series on hold for a little while? The
> lkp-robot just reported a performance regression on v1 which I still
> have to investigate. I can't look into it right away, but I expect the
> same should apply to v2.
> 
> It sounds quite weird to me, since the bisected patch touches only the
> UDP code, while the regression is on apachebench.

Hmmm, DNS lookups?

Thanks for looking into this.


* Re: [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: Paolo Abeni @ 2017-06-09 15:44 UTC
  To: David Miller; +Cc: netdev, edumazet

Hi,

On Wed, 2017-06-07 at 10:10 -0400, David Miller wrote:
> From: Paolo Abeni <pabeni@redhat.com>
> Date: Wed, 07 Jun 2017 09:56:45 +0200
> 
> > Hi David,
> > 
> > On Tue, 2017-06-06 at 16:23 +0200, Paolo Abeni wrote:
> >> When udp_recvmsg() is executed, on x86_64 and other archs, most skb
> >> fields are on cold cachelines.
> >> If the skb is linear and the kernel doesn't need to compute the UDP
> >> csum, only a handful of skb fields are required by udp_recvmsg().
> >> Since we already use skb->dev_scratch to cache hot data, and
> >> there are 32 unused bits on 64-bit archs, use that field to cache
> >> as much data as we can, and try to prefetch on dequeue the relevant
> >> fields that are left out.
> >> 
> >> This can save up to 2 cache misses per packet.
> >> 
> >> v1 -> v2:
> >>   - changed udp_dev_scratch field types to u{32,16} variants,
> >>     replaced the bitfield with bool
> >> 
> >> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> > 
> > Can you please keep this series on hold for a little while? The
> > lkp-robot just reported a performance regression on v1 which I still
> > have to investigate. I can't look into it right away, but I expect the
> > same should apply to v2.
> > 
> > It sounds quite weird to me, since the bisected patch touches only the
> > UDP code, while the regression is on apachebench.
> 
> Hmmm, DNS lookups?
> 
> Thanks for looking into this.

I spent a little time trying to reproduce the regression. There are no
DNS requests during the test, because it's run against the loopback
address (verified with perf probe on the UDP code).

I collected several samples for both the patched and vanilla kernels,
and I measured a lot of variance (even while using the same kernel) -
well above 21% - and a similar distribution of results when comparing
the vanilla and patched kernels.

I notified the lkp ML of the above, and I think this is actually a
test-suite artifact.

I'll re-submit v3 unchanged, if there are no objections.

Cheers,

Paolo


* Re: [PATCH net-next v2 3/3] udp: try to avoid 2 cache miss on dequeue
From: David Miller @ 2017-06-09 16:33 UTC
  To: pabeni; +Cc: netdev, edumazet

From: Paolo Abeni <pabeni@redhat.com>
Date: Fri, 09 Jun 2017 17:44:29 +0200

> I'll re-submit v3 unchanged, if there are no objections.

No objections from me.

