* [PATCH] net: speedup dst_release()
@ 2008-11-14 8:09 Eric Dumazet
From: Eric Dumazet @ 2008-11-14 8:09 UTC (permalink / raw)
To: David S. Miller; +Cc: Linux Netdev List, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 1620 bytes --]
During tbench/oprofile sessions, I found that dst_release() was in third position.
CPU: Core 2, speed 2999.68 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples % symbol name
483726 9.0185 __copy_user_zeroing_intel
191466 3.5697 __copy_user_intel
185475 3.4580 dst_release
175114 3.2648 ip_queue_xmit
153447 2.8608 tcp_sendmsg
108775 2.0280 tcp_recvmsg
102659 1.9140 sysenter_past_esp
101450 1.8914 tcp_current_mss
95067 1.7724 __copy_from_user_ll
86531 1.6133 tcp_transmit_skb
Of course, all CPUs fight over the dst_entry associated with 127.0.0.1.
Instead of first checking the refcount value and then decrementing it,
we use atomic_dec_return() to help the CPU make the right memory transaction
(i.e. getting the cache line in exclusive mode).
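In plain C, the change amounts to roughly the following (a simplified before/after sketch of the dst_release() body; the actual patch is attached below):

        /* before: read the counter, warn, then decrement it (two accesses to the hot line) */
        WARN_ON(atomic_read(&dst->__refcnt) < 1);
        smp_mb__before_atomic_dec();
        atomic_dec(&dst->__refcnt);

        /* after: one atomic read-modify-write that also returns the new value */
        int newrefcnt;

        smp_mb__before_atomic_dec();
        newrefcnt = atomic_dec_return(&dst->__refcnt);
        WARN_ON(newrefcnt < 0);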
dst_release() is now in fifth position, and tbench is a little bit faster ;)
CPU: Core 2, speed 3000.1 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000
samples % symbol name
647107 8.8072 __copy_user_zeroing_intel
258840 3.5229 ip_queue_xmit
258302 3.5155 __copy_user_intel
209629 2.8531 tcp_sendmsg
165632 2.2543 dst_release
149232 2.0311 tcp_current_mss
147821 2.0119 tcp_recvmsg
137893 1.8767 sysenter_past_esp
127473 1.7349 __copy_from_user_ll
121308 1.6510 ip_finish_output
118510 1.6129 tcp_transmit_skb
109295 1.4875 tcp_v4_rcv
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
[-- Attachment #2: dst_release.patch --]
[-- Type: text/plain, Size: 482 bytes --]
diff --git a/net/core/dst.c b/net/core/dst.c
index 09c1530..07e5ad2 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -263,9 +263,11 @@ again:
 void dst_release(struct dst_entry *dst)
 {
         if (dst) {
-                WARN_ON(atomic_read(&dst->__refcnt) < 1);
+                int newrefcnt;
+
                 smp_mb__before_atomic_dec();
-                atomic_dec(&dst->__refcnt);
+                newrefcnt = atomic_dec_return(&dst->__refcnt);
+                WARN_ON(newrefcnt < 0);
         }
 }
 EXPORT_SYMBOL(dst_release);
* Re: [PATCH] net: speedup dst_release()
From: David Miller @ 2008-11-14 8:54 UTC (permalink / raw)
To: dada1; +Cc: netdev, shemminger
From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 14 Nov 2008 09:09:31 +0100
> During tbench/oprofile sessions, I found that dst_release() was in third position.
...
> Instead of first checking the refcount value and then decrementing it,
> we use atomic_dec_return() to help the CPU make the right memory transaction
> (i.e. getting the cache line in exclusive mode).
...
> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
This looks great, applied, thanks Eric.
* Re: [PATCH] net: speedup dst_release()
From: Eric Dumazet @ 2008-11-14 9:04 UTC (permalink / raw)
To: David Miller; +Cc: netdev, shemminger, Alexey Dobriyan, Zhang, Yanmin
David Miller wrote:
> From: Eric Dumazet <dada1@cosmosbay.com>
> Date: Fri, 14 Nov 2008 09:09:31 +0100
>
>> During tbench/oprofile sessions, I found that dst_release() was in third position.
> ...
>> Instead of first checking the refcount value and then decrementing it,
>> we use atomic_dec_return() to help the CPU make the right memory transaction
>> (i.e. getting the cache line in exclusive mode).
> ...
>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>
> This looks great, applied, thanks Eric.
>
Thanks David
I think I have understood some regressions here on 32-bit.
offsetof(struct dst_entry, __refcnt) is 0x7c again!
This is really, really bad for performance.
I believe this comes from a patch from Alexey Dobriyan
(commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
net: reduce structures when XFRM=n)
This kills the effort from Zhang Yanmin (and me...)
(commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
[NET]: Fix tbench regression in 2.6.25-rc1)
Really, we must find something so that this damned __refcnt starts at 0x80.
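A quick way to double-check this on a given build is to print the offset from some init code (purely a hypothetical debugging snippet, not part of any patch):

        printk(KERN_INFO "struct dst_entry __refcnt offset: %zu\n",
               offsetof(struct dst_entry, __refcnt));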
* Re: [PATCH] net: speedup dst_release()
From: Alexey Dobriyan @ 2008-11-14 9:36 UTC (permalink / raw)
To: Eric Dumazet; +Cc: David Miller, netdev, shemminger, Zhang, Yanmin
On Fri, Nov 14, 2008 at 10:04:24AM +0100, Eric Dumazet wrote:
> David Miller wrote:
>> From: Eric Dumazet <dada1@cosmosbay.com>
>> Date: Fri, 14 Nov 2008 09:09:31 +0100
>>
>>> During tbench/oprofile sessions, I found that dst_release() was in third position.
>> ...
>>> Instead of first checking the refcount value and then decrementing it,
>>> we use atomic_dec_return() to help the CPU make the right memory transaction
>>> (i.e. getting the cache line in exclusive mode).
>> ...
>>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>>
>> This looks great, applied, thanks Eric.
>>
>
> Thanks David
>
>
> I think I have understood some regressions here on 32-bit.
>
> offsetof(struct dst_entry, __refcnt) is 0x7c again!
>
> This is really, really bad for performance.
>
> I believe this comes from a patch from Alexey Dobriyan
> (commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
> net: reduce structures when XFRM=n)
Ick.
> This kills the effort from Zhang Yanmin (and me...)
>
> (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
> [NET]: Fix tbench regression in 2.6.25-rc1)
>
>
> Really, we must find something so that this damned __refcnt starts at 0x80.
Make it last member?
* [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: Eric Dumazet @ 2008-11-14 10:47 UTC (permalink / raw)
To: David Miller; +Cc: Alexey Dobriyan, netdev, shemminger, Zhang, Yanmin
[-- Attachment #1: Type: text/plain, Size: 2793 bytes --]
Alexey Dobriyan wrote:
> On Fri, Nov 14, 2008 at 10:04:24AM +0100, Eric Dumazet wrote:
>> David Miller wrote:
>>> From: Eric Dumazet <dada1@cosmosbay.com>
>>> Date: Fri, 14 Nov 2008 09:09:31 +0100
>>>
>>>> During tbench/oprofile sessions, I found that dst_release() was in third position.
>>> ...
>>>> Instead of first checking the refcount value and then decrementing it,
>>>> we use atomic_dec_return() to help the CPU make the right memory transaction
>>>> (i.e. getting the cache line in exclusive mode).
>>> ...
>>>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>>> This looks great, applied, thanks Eric.
>>>
>> Thanks David
>>
>>
>> I think I have understood some regressions here on 32-bit.
>>
>> offsetof(struct dst_entry, __refcnt) is 0x7c again!
>>
>> This is really, really bad for performance.
>>
>> I believe this comes from a patch from Alexey Dobriyan
>> (commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
>> net: reduce structures when XFRM=n)
>
> Ick.
Well, your patch is a good thing, we only need to make adjustments.
>
>> This kills the effort from Zhang Yanmin (and me...)
>>
>> (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
>> [NET]: Fix tbench regression in 2.6.25-rc1)
>>
>>
>> Really, we must find something so that this damned __refcnt starts at 0x80.
>
> Make it last member?
Yes, it will help tbench, but not machines that stress the IP route cache
(dst_use() must dirty the three fields __refcnt, __use and lastuse).
Also, the 'next' pointer should be in the same cache line, to speed up route
cache lookups.
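For reference, the helper in question looks roughly like this in include/net/dst.h (a sketch; the exact body may differ slightly between kernel versions):

        static inline void dst_use(struct dst_entry *dst, unsigned long time)
        {
                dst_hold(dst);          /* atomic_inc(&dst->__refcnt) */
                dst->__use++;
                dst->lastuse = time;
        }

so all three fields want to sit on the same written-to cache line, away from the read-mostly part of the structure.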
The next problem is that the offsets depend on whether the architecture is 32- or 64-bit.
On 64-bit, offsetof(struct dst_entry, __refcnt) is 0xb0: not very good...
[PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
As found in the past (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
[NET]: Fix tbench regression in 2.6.25-rc1), it is really
important that the struct dst_entry refcount is aligned on a cache line.
We cannot use __attribute__((aligned)), so manually pad the structure
for 32-bit and 64-bit arches.
For 32-bit: offsetof(struct dst_entry, __refcnt) is 0x80.
For 64-bit: offsetof(struct dst_entry, __refcnt) is 0xc0.
As it is not possible to guess the cache line size at compile time,
we use a generic value of 64 bytes, which satisfies many current arches.
(Using 128-byte alignment on 64-bit arches would waste 64 bytes.)
Add a BUILD_BUG_ON to make sure future updates to "struct dst_entry" don't
break this alignment.
"tbench 8" is 4.4% faster on a dual quad-core (HP BL460c G1), Intel E5450 @ 3.00GHz
(2350 MB/s instead of 2250 MB/s).
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
---
include/net/dst.h | 19 +++++++++++++++++++
1 files changed, 19 insertions(+)
[-- Attachment #2: dst_align.patch --]
[-- Type: text/plain, Size: 1127 bytes --]
diff --git a/include/net/dst.h b/include/net/dst.h
index 65a60fa..6c77879 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -61,6 +61,8 @@ struct dst_entry
         struct hh_cache         *hh;
 #ifdef CONFIG_XFRM
         struct xfrm_state       *xfrm;
+#else
+        void                    *__pad1;
 #endif
         int                     (*input)(struct sk_buff*);
         int                     (*output)(struct sk_buff*);
@@ -71,8 +73,20 @@ struct dst_entry
 #ifdef CONFIG_NET_CLS_ROUTE
         __u32                   tclassid;
+#else
+        __u32                   __pad2;
 #endif
+
+        /*
+         * Align __refcnt to a 64 bytes alignment
+         * (L1_CACHE_SIZE would be too much)
+         */
+#ifdef CONFIG_64BIT
+        long                    __pad_to_align_refcnt[2];
+#else
+        long                    __pad_to_align_refcnt[1];
+#endif
         /*
          * __refcnt wants to be on a different cache line from
          * input/output/ops or performance tanks badly
@@ -157,6 +171,11 @@ dst_metric_locked(struct dst_entry *dst, int metric)
 static inline void dst_hold(struct dst_entry * dst)
 {
+        /*
+         * If your kernel compilation stops here, please check
+         * __pad_to_align_refcnt declaration in struct dst_entry
+         */
+        BUILD_BUG_ON(offsetof(struct dst_entry, __refcnt) & 63);
         atomic_inc(&dst->__refcnt);
 }
* Re: [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: Alexey Dobriyan @ 2008-11-14 11:35 UTC (permalink / raw)
To: Eric Dumazet; +Cc: David Miller, netdev, shemminger, Zhang, Yanmin
On Fri, Nov 14, 2008 at 11:47:01AM +0100, Eric Dumazet wrote:
> Alexey Dobriyan wrote:
>> On Fri, Nov 14, 2008 at 10:04:24AM +0100, Eric Dumazet wrote:
>>> David Miller wrote:
>>>> From: Eric Dumazet <dada1@cosmosbay.com>
>>>> Date: Fri, 14 Nov 2008 09:09:31 +0100
>>>>
>>>>> During tbench/oprofile sessions, I found that dst_release() was in third position.
>>>> ...
>>>>> Instead of first checking the refcount value and then decrementing it,
>>>>> we use atomic_dec_return() to help the CPU make the right memory transaction
>>>>> (i.e. getting the cache line in exclusive mode).
>>>> ...
>>>>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>>>> This looks great, applied, thanks Eric.
>>>>
>>> Thanks David
>>>
>>>
>>> I think I have understood some regressions here on 32-bit.
>>>
>>> offsetof(struct dst_entry, __refcnt) is 0x7c again!
>>>
>>> This is really, really bad for performance.
>>>
>>> I believe this comes from a patch from Alexey Dobriyan
>>> (commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
>>> net: reduce structures when XFRM=n)
>>
>> Ick.
>
> Well, your patch is a good thing, we only need to make adjustments.
>
>>
>>> This kills the effort from Zhang Yanmin (and me...)
>>>
>>> (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
>>> [NET]: Fix tbench regression in 2.6.25-rc1)
>>>
>>>
>>> Really, we must find something so that this damned __refcnt starts at 0x80.
>>
>> Make it last member?
>
> Yes, it will help tbench, but not machines that stress the IP route cache
>
> (dst_use() must dirty the three fields __refcnt, __use and lastuse).
>
> Also, the 'next' pointer should be in the same cache line, to speed up route
> cache lookups.
Knowledge taken.
> The next problem is that the offsets depend on whether the architecture is 32- or 64-bit.
>
> On 64-bit, offsetof(struct dst_entry, __refcnt) is 0xb0: not very good...
I think all these constraints can be satisfied with a clever rearrangement of dst_entry.
Let me come up with an alternative patch which still reduces the dst slab size.
* Re: [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: Eric Dumazet @ 2008-11-14 11:43 UTC (permalink / raw)
To: Alexey Dobriyan; +Cc: David Miller, netdev, shemminger, Zhang, Yanmin
Alexey Dobriyan wrote:
> On Fri, Nov 14, 2008 at 11:47:01AM +0100, Eric Dumazet wrote:
>> Alexey Dobriyan wrote:
>>> On Fri, Nov 14, 2008 at 10:04:24AM +0100, Eric Dumazet wrote:
>>>> David Miller wrote:
>>>>> From: Eric Dumazet <dada1@cosmosbay.com>
>>>>> Date: Fri, 14 Nov 2008 09:09:31 +0100
>>>>>
>>>>>> During tbench/oprofile sessions, I found that dst_release() was in third position.
>>>>> ...
>>>>>> Instead of first checking the refcount value and then decrementing it,
>>>>>> we use atomic_dec_return() to help the CPU make the right memory transaction
>>>>>> (i.e. getting the cache line in exclusive mode).
>>>>> ...
>>>>>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>>>>> This looks great, applied, thanks Eric.
>>>>>
>>>> Thanks David
>>>>
>>>>
>>>> I think I have understood some regressions here on 32-bit.
>>>>
>>>> offsetof(struct dst_entry, __refcnt) is 0x7c again!
>>>>
>>>> This is really, really bad for performance.
>>>>
>>>> I believe this comes from a patch from Alexey Dobriyan
>>>> (commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
>>>> net: reduce structures when XFRM=n)
>>> Ick.
>> Well, your patch is a good thing, we only need to make adjustments.
>>
>>>> This kills the effort from Zhang Yanmin (and me...)
>>>>
>>>> (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
>>>> [NET]: Fix tbench regression in 2.6.25-rc1)
>>>>
>>>>
>>>> Really, we must find something so that this damned __refcnt starts at 0x80.
>>> Make it last member?
>> Yes, it will help tbench, but not machines that stress the IP route cache
>>
>> (dst_use() must dirty the three fields __refcnt, __use and lastuse).
>>
>> Also, the 'next' pointer should be in the same cache line, to speed up route
>> cache lookups.
>
> Knowledge taken.
>
>> The next problem is that the offsets depend on whether the architecture is 32- or 64-bit.
>>
>> On 64-bit, offsetof(struct dst_entry, __refcnt) is 0xb0: not very good...
>
> I think all these constraints can be satisfied with a clever rearrangement of dst_entry.
> Let me come up with an alternative patch which still reduces the dst slab size.
You cannot reduce the size, and it doesn't matter, since we use dst_entry inside rtable,
and rtable uses a SLAB_HWCACHE_ALIGN kmem_cachep: we have many bytes available.
After the patch, on 32-bit:
sizeof(struct rtable) = 244 (12 bytes left)
Same for other containers.
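For context, the route cache slab is created along these lines in net/ipv4/route.c (a rough sketch; exact flags and arguments may vary by kernel version), which is why each rtable object is rounded up to a cache-line multiple anyway:

        ipv4_dst_ops.kmem_cachep =
                kmem_cache_create("ip_dst_cache", sizeof(struct rtable), 0,
                                  SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);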
* Re: [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: Alexey Dobriyan @ 2008-11-14 13:22 UTC (permalink / raw)
To: Eric Dumazet; +Cc: David Miller, netdev, shemminger, Zhang, Yanmin
On Fri, Nov 14, 2008 at 12:43:06PM +0100, Eric Dumazet wrote:
> Alexey Dobriyan wrote:
>> On Fri, Nov 14, 2008 at 11:47:01AM +0100, Eric Dumazet wrote:
>>> Alexey Dobriyan wrote:
>>>> On Fri, Nov 14, 2008 at 10:04:24AM +0100, Eric Dumazet wrote:
>>>>> David Miller wrote:
>>>>>> From: Eric Dumazet <dada1@cosmosbay.com>
>>>>>> Date: Fri, 14 Nov 2008 09:09:31 +0100
>>>>>>
>>>>>>> During tbench/oprofile sessions, I found that dst_release() was in third position.
>>>>>> ...
>>>>>>> Instead of first checking the refcount value and then decrementing it,
>>>>>>> we use atomic_dec_return() to help the CPU make the right memory transaction
>>>>>>> (i.e. getting the cache line in exclusive mode).
>>>>>> ...
>>>>>>> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
>>>>>> This looks great, applied, thanks Eric.
>>>>>>
>>>>> Thanks David
>>>>>
>>>>>
>>>>> I think I have understood some regressions here on 32-bit.
>>>>>
>>>>> offsetof(struct dst_entry, __refcnt) is 0x7c again!
>>>>>
>>>>> This is really, really bad for performance.
>>>>>
>>>>> I believe this comes from a patch from Alexey Dobriyan
>>>>> (commit def8b4faff5ca349beafbbfeb2c51f3602a6ef3a
>>>>> net: reduce structures when XFRM=n)
>>>> Ick.
>>> Well, your patch is a good thing, we only need to make adjustments.
>>>
>>>>> This kills the effort from Zhang Yanmin (and me...)
>>>>>
>>>>> (commit f1dd9c379cac7d5a76259e7dffcd5f8edc697d17
>>>>> [NET]: Fix tbench regression in 2.6.25-rc1)
>>>>>
>>>>>
>>>>> Really, we must find something so that this damned __refcnt starts at 0x80.
>>>> Make it last member?
>>> Yes, it will help tbench, but not machines that stress the IP route cache
>>>
>>> (dst_use() must dirty the three fields __refcnt, __use and lastuse).
>>>
>>> Also, the 'next' pointer should be in the same cache line, to speed up route
>>> cache lookups.
>>
>> Knowledge taken.
>>
>>> The next problem is that the offsets depend on whether the architecture is 32- or 64-bit.
>>>
>>> On 64-bit, offsetof(struct dst_entry, __refcnt) is 0xb0: not very good...
>>
>> I think all these constraints can be satisfied with a clever rearrangement of dst_entry.
>> Let me come up with an alternative patch which still reduces the dst slab size.
>
> You cannot reduce the size, and it doesn't matter, since we use dst_entry inside rtable,
> and rtable uses a SLAB_HWCACHE_ALIGN kmem_cachep: we have many bytes available.
>
> After the patch, on 32-bit:
>
> sizeof(struct rtable) = 244 (12 bytes left)
>
> Same for other containers.
Hmm, indeed.
I tried moving __refcnt et al to the very beginning, but it seems to make
things worse (on x86_64, almost within statistical error).
And there is no way to use offsetof() inside a struct definition. :-(
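The check does have to live in code compiled after the definition is complete, for instance in a static inline helper; a minimal sketch of the idea (dst_layout_check() is a made-up name for illustration), which is essentially what the patch above already does inside dst_hold():

        static inline void dst_layout_check(void)
        {
                /* fails the build if __refcnt is not aligned on a 64-byte boundary */
                BUILD_BUG_ON(offsetof(struct dst_entry, __refcnt) & 63);
        }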
* Re: [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: Eric Dumazet @ 2008-11-14 13:37 UTC (permalink / raw)
To: Alexey Dobriyan; +Cc: David Miller, netdev, shemminger, Zhang, Yanmin
Alexey Dobriyan wrote:
> Hmm, indeed.
>
> I tried moving __refcnt et al to the very beginning, but it seems to make
> things worse (on x86_64, almost within statistical error).
>
> And there is no way to use offsetof() inside a struct definition. :-(
Yes, it is important that the beginning of the structure contains read-mostly fields.
__refcnt being the most-written field (incremented/decremented for each packet),
it is really important to move it outside of the first 128 bytes
(192 bytes on 64-bit arches) of dst_entry.
I wonder if some really hot dst_entries could be split (one copy per stream)
to reduce cache-line ping-pong.
* Re: [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
From: David Miller @ 2008-11-17 3:46 UTC (permalink / raw)
To: dada1; +Cc: adobriyan, netdev, shemminger, yanmin_zhang
From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 14 Nov 2008 11:47:01 +0100
> [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes
Applied to net-next-2.6, thanks Eric.
Thread overview: 10+ messages
2008-11-14 8:09 [PATCH] net: speedup dst_release() Eric Dumazet
2008-11-14 8:54 ` David Miller
2008-11-14 9:04 ` Eric Dumazet
2008-11-14 9:36 ` Alexey Dobriyan
2008-11-14 10:47 ` [PATCH] net: make sure struct dst_entry refcount is aligned on 64 bytes Eric Dumazet
2008-11-14 11:35 ` Alexey Dobriyan
2008-11-14 11:43 ` Eric Dumazet
2008-11-14 13:22 ` Alexey Dobriyan
2008-11-14 13:37 ` Eric Dumazet
2008-11-17 3:46 ` David Miller