* [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
@ 2008-11-22  7:19 Eric Dumazet
  2008-11-22  7:22 ` Stephen Hemminger
  2008-11-23 23:45 ` David Miller
  0 siblings, 2 replies; 6+ messages in thread
From: Eric Dumazet @ 2008-11-22  7:19 UTC (permalink / raw)
  To: David S. Miller; +Cc: Linux Netdev List

[-- Attachment #1: Type: text/plain, Size: 1455 bytes --]

Hello David, this is a resend of a patch previously sent in a 
"tbench regression ..." thread on lkml

We should also address the problem of the skb_pull(skb, ETH_HLEN)
call in eth_type_trans():

Since it is not inlined, that call forces eth_type_trans() to be a
non-leaf function, which costs precious CPU cycles on many arches.
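
To illustrate the point, here is a small user-space sketch (hypothetical
helper names, not kernel code): as soon as a function contains one
out-of-line call, the compiler must emit a stack frame and call setup for
it, which is the overhead the non-inlined skb_pull() adds here.

/* classify() stands in for eth_type_trans(), pull_header() for skb_pull(). */
__attribute__((noinline))
static void pull_header(unsigned char **frame, unsigned int len)
{
	*frame += len;			/* out of line, like the current skb_pull() */
}

static unsigned int classify(unsigned char *frame)
{
	pull_header(&frame, 14);	/* this call alone makes classify() non-leaf */
	return frame[0] & 1;		/* e.g. a multicast bit test */
}

int main(void)
{
	unsigned char frame[32] = { 0 };

	return (int)classify(frame);
}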

Thank you

[PATCH] eth: Declare an optimized compare_ether_addr_64bits() function

Linus mentioned we could try to perform long word operations, even
on potentially unaligned addresses, on x86 at least.

I tried this idea and got nice assembly on 32 bits:

158:   33 82 38 01 00 00       xor    0x138(%edx),%eax
15e:   33 8a 34 01 00 00       xor    0x134(%edx),%ecx
164:   c1 e0 10                shl    $0x10,%eax
167:   09 c1                   or     %eax,%ecx
169:   74 0b                   je     176 <eth_type_trans+0x87>

And very nice assembly on 64 bits of course (one xor, one shl)

Nice oprofile improvement in eth_type_trans(), 0.17 % instead of 0.41 %,
expected since we remove 8 instructions on a fast path.

This patch implements a compare_ether_addr_64bits() function,
that handles the case of x86 cpus, but might be used on other arches as well,
if their potential misaligned long word reads are not expensive.
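
For reference, the core idea reduces to this user-space sketch (a
little-endian illustration only, with memcpy() standing in for the
unaligned loads; the real helper below uses zap_last_2bytes() to handle
both endiannesses):

#include <stdint.h>
#include <string.h>

/* Both addresses live in 8-byte buffers: 6 address bytes plus 2 bytes of
 * padding.  On little-endian the padding ends up in the top 16 bits of the
 * 64-bit word, so shifting left by 16 discards it before the test.
 */
static unsigned mac_compare_64bits(const unsigned char addr1[8],
				   const unsigned char addr2[8])
{
	uint64_t a, b;

	memcpy(&a, addr1, sizeof(a));	/* stands in for an unaligned load */
	memcpy(&b, addr2, sizeof(b));
	return ((a ^ b) << 16) != 0;	/* 0 iff the first 6 bytes match */
}

int main(void)
{
	unsigned char x[8] = { 0, 1, 2, 3, 4, 5, 0xaa, 0xbb };
	unsigned char y[8] = { 0, 1, 2, 3, 4, 5, 0xcc, 0xdd };

	return (int)mac_compare_64bits(x, y);	/* 0: padding differences ignored */
}

On 32-bit the same fold is simply split into two 32-bit loads, which is
the xor/xor/shl/or sequence shown in the listing above.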

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
---
 include/linux/etherdevice.h |   41 ++++++++++++++++++++++++++++++++++
 net/ethernet/eth.c          |    4 +--
 2 files changed, 43 insertions(+), 2 deletions(-)

[-- Attachment #2: compare_ether_addr_64bits.patch --]
[-- Type: text/plain, Size: 2438 bytes --]

diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
index 25d62e6..ee0df09 100644
--- a/include/linux/etherdevice.h
+++ b/include/linux/etherdevice.h
@@ -136,6 +136,47 @@ static inline unsigned compare_ether_addr(const u8 *addr1, const u8 *addr2)
 	BUILD_BUG_ON(ETH_ALEN != 6);
 	return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) != 0;
 }
+
+static inline unsigned long zap_last_2bytes(unsigned long value)
+{
+#ifdef __BIG_ENDIAN
+	return value >> 16;
+#else
+	return value << 16;
+#endif
+}
+
+/**
+ * compare_ether_addr_64bits - Compare two Ethernet addresses
+ * @addr1: Pointer to an array of 8 bytes
+ * @addr2: Pointer to another array of 8 bytes
+ *
+ * Compare two Ethernet addresses, returns 0 if equal.
+ * Same result as "memcmp(addr1, addr2, ETH_ALEN)" but without conditional
+ * branches, and possibly long word memory accesses on CPUs allowing cheap
+ * unaligned memory reads.
+ * arrays = { byte1, byte2, byte3, byte4, byte5, byte6, pad1, pad2 }
+ *
+ * Please note that alignment of addr1 & addr2 is only guaranteed to be 16 bits.
+ */
+
+static inline unsigned compare_ether_addr_64bits(const u8 addr1[6+2],
+						 const u8 addr2[6+2])
+{
+#if defined(CONFIG_X86)
+	unsigned long fold = *(const unsigned long *)addr1 ^
+			     *(const unsigned long *)addr2;
+
+	if (sizeof(fold) == 8)
+		return zap_last_2bytes(fold) != 0;
+
+	fold |= zap_last_2bytes(*(const unsigned long *)(addr1 + 4) ^
+				*(const unsigned long *)(addr2 + 4));
+	return fold != 0;
+#else
+	return compare_ether_addr(addr1, addr2);
+#endif
+}
 #endif	/* __KERNEL__ */
 
 #endif	/* _LINUX_ETHERDEVICE_H */
diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
index b9d85af..dcfeb9b 100644
--- a/net/ethernet/eth.c
+++ b/net/ethernet/eth.c
@@ -166,7 +166,7 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
 	eth = eth_hdr(skb);
 
 	if (is_multicast_ether_addr(eth->h_dest)) {
-		if (!compare_ether_addr(eth->h_dest, dev->broadcast))
+		if (!compare_ether_addr_64bits(eth->h_dest, dev->broadcast))
 			skb->pkt_type = PACKET_BROADCAST;
 		else
 			skb->pkt_type = PACKET_MULTICAST;
@@ -181,7 +181,7 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
 	 */
 
 	else if (1 /*dev->flags&IFF_PROMISC */ ) {
-		if (unlikely(compare_ether_addr(eth->h_dest, dev->dev_addr)))
+		if (unlikely(compare_ether_addr_64bits(eth->h_dest, dev->dev_addr)))
 			skb->pkt_type = PACKET_OTHERHOST;
 	}
 


* Re: [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
  2008-11-22  7:19 [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function Eric Dumazet
@ 2008-11-22  7:22 ` Stephen Hemminger
  2008-11-22  7:30   ` Eric Dumazet
  2008-11-23 23:45 ` David Miller
  1 sibling, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2008-11-22  7:22 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David S. Miller, Linux Netdev List

On Sat, 22 Nov 2008 08:19:17 +0100
Eric Dumazet <dada1@cosmosbay.com> wrote:

> Hello David, this is a resend of a patch previously sent in a 
> "tbench regression ..." thread on lkml
> 
> We should also address the problem of the skb_pull(skb, ETH_HLEN)
> call in eth_type_trans():
> 
> Since it is not inlined, that call forces eth_type_trans() to be a
> non-leaf function, which costs precious CPU cycles on many arches.
> 
> Thank you
> 
> [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
> 
> Linus mentioned we could try to perform long word operations, even
> on potentially unaligned addresses, on x86 at least.
> 
> I tried this idea and got nice assembly on 32 bits:
> 
> 158:   33 82 38 01 00 00       xor    0x138(%edx),%eax
> 15e:   33 8a 34 01 00 00       xor    0x134(%edx),%ecx
> 164:   c1 e0 10                shl    $0x10,%eax
> 167:   09 c1                   or     %eax,%ecx
> 169:   74 0b                   je     176 <eth_type_trans+0x87>
> 
> And very nice assembly on 64 bits of course (one xor, one shl)
> 
> Nice oprofile improvement in eth_type_trans(), 0.17 % instead of 0.41 %,
> expected since we remove 8 instructions on a fast path.
> 
> This patch implements a compare_ether_addr_64bits() function,
> that handles the case of x86 cpus, but might be used on other arches as well,
> if their potential misaligned long word reads are not expensive.
> 

Why invent another function? Why not just have compare_ether_addr() be
as optimized as possible, could even set it up to be overloadable by
asm code.


* Re: [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
  2008-11-22  7:22 ` Stephen Hemminger
@ 2008-11-22  7:30   ` Eric Dumazet
  0 siblings, 0 replies; 6+ messages in thread
From: Eric Dumazet @ 2008-11-22  7:30 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: David S. Miller, Linux Netdev List

Stephen Hemminger wrote:
> On Sat, 22 Nov 2008 08:19:17 +0100
> Eric Dumazet <dada1@cosmosbay.com> wrote:
> 
>> Hello David, this is a resend of a patch previously sent in a 
>> "tbench regression ..." thread on lkml
>>
>> We should also address the problem of the skb_pull(skb, ETH_HLEN)
>> call in eth_type_trans():
>>
>> Since it is not inlined, that call forces eth_type_trans() to be a
>> non-leaf function, which costs precious CPU cycles on many arches.
>>
>> Thank you
>>
>> [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
>>
>> Linus mentioned we could try to perform long word operations, even
>> on potentially unaligned addresses, on x86 at least.
>>
>> I tried this idea and got nice assembly on 32 bits:
>>
>> 158:   33 82 38 01 00 00       xor    0x138(%edx),%eax
>> 15e:   33 8a 34 01 00 00       xor    0x134(%edx),%ecx
>> 164:   c1 e0 10                shl    $0x10,%eax
>> 167:   09 c1                   or     %eax,%ecx
>> 169:   74 0b                   je     176 <eth_type_trans+0x87>
>>
>> And very nice assembly on 64 bits of course (one xor, one shl)
>>
>> Nice oprofile improvement in eth_type_trans(), 0.17 % instead of 0.41 %,
>> expected since we remove 8 instructions on a fast path.
>>
>> This patch implements a compare_ether_addr_64bits() function,
>> that handles the case of x86 cpus, but might be used on other arches as well,
>> if their potential misaligned long word reads are not expensive.
>>
> 
> Why invent another function? Why not just have compare_ether_addr() be
> as optimized as possible, could even set it up to be overloadable by
> asm code.

Because I am not sure we can fetch 8 bytes from addr1 & addr2 at all
call sites. Better to be safe, and convert each call site after an audit.
Then, when fully audited, rename the function?
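
A contrived user-space illustration of the concern (hypothetical 6-byte
allocation, not an actual kernel call site):

#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* A MAC address stored in an exactly 6-byte allocation. */
	unsigned char *mac = malloc(6);
	static const unsigned char other[6 + 2];	/* padded, as the new helper expects */

	if (!mac)
		return 1;
	memset(mac, 0, 6);

	/* Always safe: only 6 bytes are read on each side. */
	(void)memcmp(mac, other, 6);

	/*
	 * compare_ether_addr_64bits(mac, other) would not be safe here: the
	 * 8-byte load on 64-bit reads 2 bytes past the end of the 6-byte
	 * allocation.  Hence the explicit u8 addr[6+2] contract and the
	 * per-call-site audit before converting callers.
	 */

	free(mac);
	return 0;
}

The eth_type_trans() sites converted in this patch should be fine on that
front: eth->h_dest is followed by h_source in the same header, and
dev->broadcast / dev->dev_addr are larger fixed-size arrays, so the two
extra bytes stay within valid memory.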




* Re: [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
  2008-11-22  7:19 [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function Eric Dumazet
  2008-11-22  7:22 ` Stephen Hemminger
@ 2008-11-23 23:45 ` David Miller
  2008-11-24  6:27   ` Eric Dumazet
  1 sibling, 1 reply; 6+ messages in thread
From: David Miller @ 2008-11-23 23:45 UTC (permalink / raw)
  To: dada1; +Cc: netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Sat, 22 Nov 2008 08:19:17 +0100

> This patch implements a compare_ether_addr_64bits() function, that
> handles the case of x86 cpus, but might be used on other arches as
> well, if their potential misaligned long word reads are not
> expensive.

We have a test for this, HAVE_EFFICIENT_UNALIGNED_ACCESS

Please use that instead of CONFIG_X86 and I'll apply this
to net-next-2.6

Thanks!


* Re: [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
  2008-11-23 23:45 ` David Miller
@ 2008-11-24  6:27   ` Eric Dumazet
  2008-11-24  6:46     ` David Miller
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2008-11-24  6:27 UTC (permalink / raw)
  To: David Miller; +Cc: netdev

[-- Attachment #1: Type: text/plain, Size: 1852 bytes --]

David Miller wrote:
> From: Eric Dumazet <dada1@cosmosbay.com>
> Date: Sat, 22 Nov 2008 08:19:17 +0100
> 
>> This patch implements a compare_ether_addr_64bits() function, that
>> handles the case of x86 cpus, but might be used on other arches as
>> well, if their potential misaligned long word reads are not
>> expensive.
> 
> We have a test for this, HAVE_EFFICIENT_UNALIGNED_ACCESS
> 
> Please use that instead of CONFIG_X86 and I'll apply this
> to net-next-2.6
> 

Excellent!

I missed this cool feature.

Thanks David

[PATCH] eth: Declare an optimized compare_ether_addr_64bits() function

Linus mentioned we could try to perform long word operations, even
on potentially unaligned addresses, on x86 at least. David mentioned
the HAVE_EFFICIENT_UNALIGNED_ACCESS test to handle this on all
arches that have efficient unaligned accesses.

I tried this idea and got nice assembly on 32 bits:

158:   33 82 38 01 00 00       xor    0x138(%edx),%eax
15e:   33 8a 34 01 00 00       xor    0x134(%edx),%ecx
164:   c1 e0 10                shl    $0x10,%eax
167:   09 c1                   or     %eax,%ecx
169:   74 0b                   je     176 <eth_type_trans+0x87>

And very nice assembly on 64 bits of course (one xor, one shl)

Nice oprofile improvement in eth_type_trans(), 0.17 % instead of 0.41 %,
expected since we remove 8 instructions on a fast path.

This patch implements a compare_ether_addr_64bits() function
that uses the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS ifdef
and the get_unaligned() macro to efficiently perform the 6-byte
comparison on all capable arches.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
---
 include/linux/etherdevice.h |   44 ++++++++++++++++++++++++++++++++++
 net/ethernet/eth.c          |    6 ++--
 2 files changed, 47 insertions(+), 3 deletions(-)

[-- Attachment #2: ether.patch --]
[-- Type: text/plain, Size: 2838 bytes --]

diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
index 0e5e970..2a3419a 100644
--- a/include/linux/etherdevice.h
+++ b/include/linux/etherdevice.h
@@ -27,6 +27,7 @@
 #include <linux/if_ether.h>
 #include <linux/netdevice.h>
 #include <linux/random.h>
+#include <asm/unaligned.h>
 
 #ifdef __KERNEL__
 extern __be16		eth_type_trans(struct sk_buff *skb, struct net_device *dev);
@@ -140,6 +141,49 @@ static inline unsigned compare_ether_addr(const u8 *addr1, const u8 *addr2)
 	BUILD_BUG_ON(ETH_ALEN != 6);
 	return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) != 0;
 }
+
+static inline unsigned long zap_last_2bytes(unsigned long value)
+{
+#ifdef __BIG_ENDIAN
+	return value >> 16;
+#else
+	return value << 16;
+#endif
+}
+
+/**
+ * compare_ether_addr_64bits - Compare two Ethernet addresses
+ * @addr1: Pointer to an array of 8 bytes
+ * @addr2: Pointer to another array of 8 bytes
+ *
+ * Compare two Ethernet addresses, returns 0 if equal.
+ * Same result as "memcmp(addr1, addr2, ETH_ALEN)" but without conditional
+ * branches, and possibly long word memory accesses on CPUs allowing cheap
+ * unaligned memory reads.
+ * arrays = { byte1, byte2, byte3, byte4, byte5, byte6, pad1, pad2 }
+ *
+ * Please note that alignment of addr1 & addr2 is only guaranteed to be 16 bits.
+ */
+
+static inline unsigned compare_ether_addr_64bits(const u8 addr1[6+2],
+						 const u8 addr2[6+2])
+{
+#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+	unsigned long fold = get_unaligned((const unsigned long *)addr1) ^
+			     get_unaligned((const unsigned long *)addr2);
+
+	if (sizeof(fold) == 8)
+		return zap_last_2bytes(fold) != 0;
+
+	fold |= zap_last_2bytes(
+			get_unaligned((const unsigned long *)(addr1 + 4)) ^
+			get_unaligned((const unsigned long *)(addr2 + 4))
+			);
+	return fold != 0;
+#else
+	return compare_ether_addr(addr1, addr2);
+#endif
+}
 #endif	/* __KERNEL__ */
 
 #endif	/* _LINUX_ETHERDEVICE_H */
diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c
index a87a171..280352a 100644
--- a/net/ethernet/eth.c
+++ b/net/ethernet/eth.c
@@ -165,8 +165,8 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
 	skb_pull(skb, ETH_HLEN);
 	eth = eth_hdr(skb);
 
-	if (is_multicast_ether_addr(eth->h_dest)) {
-		if (!compare_ether_addr(eth->h_dest, dev->broadcast))
+	if (unlikely(is_multicast_ether_addr(eth->h_dest))) {
+		if (!compare_ether_addr_64bits(eth->h_dest, dev->broadcast))
 			skb->pkt_type = PACKET_BROADCAST;
 		else
 			skb->pkt_type = PACKET_MULTICAST;
@@ -181,7 +181,7 @@ __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
 	 */
 
 	else if (1 /*dev->flags&IFF_PROMISC */ ) {
-		if (unlikely(compare_ether_addr(eth->h_dest, dev->dev_addr)))
+		if (unlikely(compare_ether_addr_64bits(eth->h_dest, dev->dev_addr)))
 			skb->pkt_type = PACKET_OTHERHOST;
 	}
 


* Re: [PATCH] eth: Declare an optimized compare_ether_addr_64bits() function
  2008-11-24  6:27   ` Eric Dumazet
@ 2008-11-24  6:46     ` David Miller
  0 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2008-11-24  6:46 UTC (permalink / raw)
  To: dada1; +Cc: netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Mon, 24 Nov 2008 07:27:46 +0100

> David Miller wrote:
> > From: Eric Dumazet <dada1@cosmosbay.com>
> > Date: Sat, 22 Nov 2008 08:19:17 +0100
> > 
> >> This patch implements a compare_ether_addr_64bits() function, that
> >> handles the case of x86 cpus, but might be used on other arches as
> >> well, if their potential misaligned long word reads are not
> >> expensive.
> > We have a test for this, HAVE_EFFICIENT_UNALIGNED_ACCESS
> > Please use that instead of CONFIG_X86 and I'll apply this
> > to net-next-2.6
> > 
> 
> Excellent !
> 
> I missed this cool feature.

Sorry, one last nit.

If the platform defines HAVE_EFFICIENT_UNALIGNED_ACCESS, there
is no point in using the get_unaligned() interfaces.
They are by definition a NOP in such cases.
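
To illustrate, a rough model of the two cases (simplified; the real
asm-generic unaligned headers are more involved):

#include <string.h>

#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
/* The CPU copes with unaligned loads, so a plain dereference is enough... */
#define get_unaligned(ptr)	(*(ptr))
#else
/* ...otherwise the access has to be assembled from narrower loads. */
#define get_unaligned(ptr)				\
({							\
	__typeof__(*(ptr)) __val;			\
	memcpy(&__val, (ptr), sizeof(__val));		\
	__val;						\
})
#endif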

So I tossed them while applying this second version of your
patch, thanks!

