netdev.vger.kernel.org archive mirror
* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
       [not found] <20080321075521.49347370@extreme>
@ 2008-03-21 16:01 ` Paul E. McKenney
  2008-03-21 17:25   ` Eric Dumazet
  2008-03-21 22:50 ` David Miller
  1 sibling, 1 reply; 6+ messages in thread
From: Paul E. McKenney @ 2008-03-21 16:01 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: David Miller, netdev

On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
> Small performance improvements.
> 
> Eliminate unneeded barrier on deletion. The first pointer to update
> the head of the list is ordered by the second call to rcu_assign_pointer.
> See hlist_add_after_rcu for comparison.
> 
> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
> add a prefetch.

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Justification below.

> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
> 
> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
> @@ -977,8 +977,8 @@ restart:
>  			 * must be visible to another weakly ordered CPU before
>  			 * the insertion at the start of the hash chain.
>  			 */
> -			rcu_assign_pointer(rth->u.dst.rt_next,
> -					   rt_hash_table[hash].chain);
> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
> +

This is OK because it is finalizing a deletion.  If this were instead
an insertion, this would of course be grossly illegal and dangerous.
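
To make the ordering concrete, here is a hedged sketch of the pattern the
changelog is leaning on (a toy list, not the actual route.c code, with
made-up names): only the store that makes an element reachable to readers
needs rcu_assign_pointer()'s barrier; a preceding plain store into the
element is ordered before the publication by that same barrier, just as
in hlist_add_head_rcu().

struct node {
	struct node *next;
	int key;
};

static void publish_at_head(struct node **head, struct node *n)
{
	n->next = *head;		/* plain store: n not yet reachable here */
	rcu_assign_pointer(*head, n);	/* barrier + publish: readers that
					 * rcu_dereference() the new *head
					 * also see n->next */
}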

>  			/*
>  			 * Since lookup is lockfree, the update writes
>  			 * must be ordered for consistency on SMP.
> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
>  	hash = rt_hash(daddr, saddr, iif);
> 
>  	rcu_read_lock();
> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
> +	     rth = rth->u.dst.rt_next) {
> +		prefetch(rth->u.dst.rt_next);
>  		if (rth->fl.fl4_dst == daddr &&
>  		    rth->fl.fl4_src == saddr &&
>  		    rth->fl.iif == iif &&

Works, though I would guess that increasingly aggressive compiler
optimization will eventually force us to change the list.h macros
to look like what you had to begin with...  Sigh!!!


* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
  2008-03-21 16:01 ` [PATCH net-2.6.26] fib_trie: RCU optimizations Paul E. McKenney
@ 2008-03-21 17:25   ` Eric Dumazet
  2008-03-21 17:31     ` Stephen Hemminger
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2008-03-21 17:25 UTC (permalink / raw)
  To: paulmck; +Cc: Stephen Hemminger, David Miller, netdev

Paul E. McKenney wrote:
> On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
>   
>> Small performance improvements.
>>
>> Eliminate unneeded barrier on deletion. The first pointer to update
>> the head of the list is ordered by the second call to rcu_assign_pointer.
>> See hlist_add_after_rcu for comparison.
>>
>> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
>> add a prefetch.
>>     
>
> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>
> Justification below.
>
>   
>> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
>>
>> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
>> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
>> @@ -977,8 +977,8 @@ restart:
>>  			 * must be visible to another weakly ordered CPU before
>>  			 * the insertion at the start of the hash chain.
>>  			 */
>> -			rcu_assign_pointer(rth->u.dst.rt_next,
>> -					   rt_hash_table[hash].chain);
>> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
>> +
>>     
>
> This is OK because it is finalizing a deletion.  If this were instead
> an insertion, this would of course be grossly illegal and dangerous.
>
>   
>>  			/*
>>  			 * Since lookup is lockfree, the update writes
>>  			 * must be ordered for consistency on SMP.
>> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
>>  	hash = rt_hash(daddr, saddr, iif);
>>
>>  	rcu_read_lock();
>> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
>> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
>> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
>> +	     rth = rth->u.dst.rt_next) {
>> +		prefetch(rth->u.dst.rt_next);
>>  		if (rth->fl.fl4_dst == daddr &&
>>  		    rth->fl.fl4_src == saddr &&
>>  		    rth->fl.iif == iif &&
>>     
>
> Works, though I would guess that increasingly aggressive compiler
> optimization will eventually force us to change the list.h macros
> to look like what you had to begin with...  Sigh!!!
>
>   

Hum... I missed the original patch, but this prefetch() is wrong.

On lookups, we don't want to prefetch the beginning of "struct rtable"
entries.

We were very careful in the past
(http://git2.kernel.org/?p=linux/kernel/git/davem/net-2.6.26.git;a=commit;h=1e19e02ca0c5e33ea73a25127dbe6c3b8fcaac4b
"[NET]: Reorder fields of struct dst_entry")
to place the "next pointer" at the end of "struct dst" so that lookups
only bring in one cache line per entry.
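
A rough sketch of that layout idea (abbreviated and hypothetical field
names, not the real struct dst_entry / struct rtable definition): with
the chain pointer kept next to the lookup keys at the end of the entry,
a lookup that rejects an entry never touches its first cache line.

struct fake_rtable {
	char output_state[128];		/* output-path fields, cold during lookup */
	struct fake_rtable *next;	/* hot on lookup: shares a cache line... */
	unsigned int key_dst;		/* ...with the keys the lookup compares */
	unsigned int key_src;
	int key_iif;
};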

Thank you

* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
  2008-03-21 17:25   ` Eric Dumazet
@ 2008-03-21 17:31     ` Stephen Hemminger
  2008-03-21 17:44       ` Eric Dumazet
  0 siblings, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2008-03-21 17:31 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: paulmck, David Miller, netdev

On Fri, 21 Mar 2008 18:25:04 +0100
Eric Dumazet <dada1@cosmosbay.com> wrote:

> Paul E. McKenney wrote:
> > On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
> >   
> >> Small performance improvements.
> >>
> >> Eliminate unneeded barrier on deletion. The first pointer to update
> >> the head of the list is ordered by the second call to rcu_assign_pointer.
> >> See hlist_add_after_rcu for comparison.
> >>
> >> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
> >> add a prefetch.
> >>     
> >
> > Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >
> > Justification below.
> >
> >   
> >> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
> >>
> >> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
> >> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
> >> @@ -977,8 +977,8 @@ restart:
> >>  			 * must be visible to another weakly ordered CPU before
> >>  			 * the insertion at the start of the hash chain.
> >>  			 */
> >> -			rcu_assign_pointer(rth->u.dst.rt_next,
> >> -					   rt_hash_table[hash].chain);
> >> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
> >> +
> >>     
> >
> > This is OK because it is finalizing a deletion.  If this were instead
> > an insertion, this would of course be grossly illegal and dangerous.
> >
> >   
> >>  			/*
> >>  			 * Since lookup is lockfree, the update writes
> >>  			 * must be ordered for consistency on SMP.
> >> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
> >>  	hash = rt_hash(daddr, saddr, iif);
> >>
> >>  	rcu_read_lock();
> >> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
> >> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
> >> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
> >> +	     rth = rth->u.dst.rt_next) {
> >> +		prefetch(rth->u.dst.rt_next);
> >>  		if (rth->fl.fl4_dst == daddr &&
> >>  		    rth->fl.fl4_src == saddr &&
> >>  		    rth->fl.iif == iif &&
> >>     
> >
> > Works, though I would guess that increasingly aggressive compiler
> > optimization will eventually force us to change the list.h macros
> > to look like what you had to begin with...  Sigh!!!
> >
> >   
> 
> Hum... I missed the original patch, but this prefetch() is wrong.
> 
> On lookups, we don't want to prefetch the beginning of "struct rtable"
> entries.

That makes sense when the hash is perfect, but under a DoS scenario
the chain entries will not match exactly, and the next pointer will
be needed.


* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
  2008-03-21 17:31     ` Stephen Hemminger
@ 2008-03-21 17:44       ` Eric Dumazet
  2008-03-21 22:49         ` David Miller
  0 siblings, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2008-03-21 17:44 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: paulmck, David Miller, netdev

Stephen Hemminger wrote:
> On Fri, 21 Mar 2008 18:25:04 +0100
> Eric Dumazet <dada1@cosmosbay.com> wrote:
>
>   
>> Paul E. McKenney wrote:
>>     
>>> On Fri, Mar 21, 2008 at 07:55:21AM -0700, Stephen Hemminger wrote:
>>>   
>>>       
>>>> Small performance improvements.
>>>>
>>>> Eliminate unneeded barrier on deletion. The first pointer to update
>>>> the head of the list is ordered by the second call to rcu_assign_pointer.
>>>> See hlist_add_after_rcu for comparison.
>>>>
>>>> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
>>>> add a prefetch.
>>>>     
>>>>         
>>> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>>>
>>> Justification below.
>>>
>>>   
>>>       
>>>> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
>>>>
>>>> --- a/net/ipv4/route.c	2008-03-19 08:45:32.000000000 -0700
>>>> +++ b/net/ipv4/route.c	2008-03-19 08:54:57.000000000 -0700
>>>> @@ -977,8 +977,8 @@ restart:
>>>>  			 * must be visible to another weakly ordered CPU before
>>>>  			 * the insertion at the start of the hash chain.
>>>>  			 */
>>>> -			rcu_assign_pointer(rth->u.dst.rt_next,
>>>> -					   rt_hash_table[hash].chain);
>>>> +			rth->u.dst.rt_next = rt_hash_table[hash].chain;
>>>> +
>>>>     
>>>>         
>>> This is OK because it is finalizing a deletion.  If this were instead
>>> an insertion, this would of course be grossly illegal and dangerous.
>>>
>>>   
>>>       
>>>>  			/*
>>>>  			 * Since lookup is lockfree, the update writes
>>>>  			 * must be ordered for consistency on SMP.
>>>> @@ -2076,8 +2076,9 @@ int ip_route_input(struct sk_buff *skb, 
>>>>  	hash = rt_hash(daddr, saddr, iif);
>>>>
>>>>  	rcu_read_lock();
>>>> -	for (rth = rcu_dereference(rt_hash_table[hash].chain); rth;
>>>> -	     rth = rcu_dereference(rth->u.dst.rt_next)) {
>>>> +	for (rth = rt_hash_table[hash].chain; rcu_dereference(rth);
>>>> +	     rth = rth->u.dst.rt_next) {
>>>> +		prefetch(rth->u.dst.rt_next);
>>>>  		if (rth->fl.fl4_dst == daddr &&
>>>>  		    rth->fl.fl4_src == saddr &&
>>>>  		    rth->fl.iif == iif &&
>>>>     
>>>>         
>>> Works, though I would guess that increasingly aggressive compiler
>>> optimization will eventually force us to change the list.h macros
>>> to look like what you had to begin with...  Sigh!!!
>>>
>>>   
>>>       
>> Hum... I missed the original patch, but this prefetch() is wrong.
>>
>> On lookups, we don't want to prefetch the beginning of "struct rtable"
>> entries.
>>     
>
> That makes sense when the hash is perfect, but under a DoS scenario
> the chain entries will not match exactly, and the next pointer will
> be needed.
>
>   

Hum... your prefetch() is useful *only* if the hash is perfect.

My point is: I care about the DoS scenario :)

struct something {
	char pad[128];			/* fields not touched by lookup */
	struct something *next;
	int key1;
	int key2;
};

struct something *lookup(int key1, int key2)
{
	struct something *candidate, *next;
	...
	while (not found) {
		next = candidate->next;
		prefetch(next);	/* not useful for the lookup phase, since it
				 * only brings in next->pad[0..XX] */
		if (key1 == candidate->key1 && ...) { ... }
		...
	}
}




You really need something like prefetch(&next->next);

But I already tested this in the past in this function and got no 
improvement at all.

The loop is so small that prefetch hints are thrown away by the CPU, or
the cost of setting up the prefetch (register + offset) is too
expensive...






* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
  2008-03-21 17:44       ` Eric Dumazet
@ 2008-03-21 22:49         ` David Miller
  0 siblings, 0 replies; 6+ messages in thread
From: David Miller @ 2008-03-21 22:49 UTC (permalink / raw)
  To: dada1; +Cc: shemminger, paulmck, netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 21 Mar 2008 18:44:33 +0100

> The loop is so small that prefetch hints are thrown away by the CPU, or
> the cost of setting up the prefetch (register + offset) is too
> expensive...

That's been my experience as well.

For this reason I consider the unconditional prefetches in the
linux/list.h list iteration macros troublesome.

Most loops are small, so by default those macros should not prefetch;
when you know you have a large enough loop (and thus enough work to
make the prefetch actually matter), you use a specially named variant
of the list traversal macro.
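
Something along these lines, purely as a sketch of the split being
suggested (hypothetical names, not actual list.h code): the plain
iterator does no prefetch, while a separately named variant adds it for
the rare loops with enough per-entry work to hide the latency.

#define xlist_for_each(pos, head) \
	for (pos = (head)->next; pos != (head); pos = pos->next)

#define xlist_for_each_prefetch(pos, head) \
	for (pos = (head)->next; prefetch(pos->next), pos != (head); \
	     pos = pos->next)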

Oh yes, and you have to provide proof of a performance improvement
before submitting a patch that uses the prefetch-based list macro
variants. :-)



* Re: [PATCH net-2.6.26] fib_trie: RCU optimizations
       [not found] <20080321075521.49347370@extreme>
  2008-03-21 16:01 ` [PATCH net-2.6.26] fib_trie: RCU optimizations Paul E. McKenney
@ 2008-03-21 22:50 ` David Miller
  1 sibling, 0 replies; 6+ messages in thread
From: David Miller @ 2008-03-21 22:50 UTC (permalink / raw)
  To: shemminger; +Cc: paulmck, netdev

From: Stephen Hemminger <shemminger@vyatta.com>
Date: Fri, 21 Mar 2008 07:55:21 -0700

> Small performance improvements.
> 
> Eliminate unneeded barrier on deletion. The first pointer to update
> the head of the list is ordered by the second call to rcu_assign_pointer.
> See hlist_add_after_rcu for comparison.
> 
> Move rcu_dereference to the loop check (like hlist_for_each_rcu), and
> add a prefetch.
> 
> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>

Please resubmit without the prefetch unless you can prove
that it noticeably improves performance.

It'd be nice to split these two changes up logically anyway,
even if the prefetch is warranted, so they can be analyzed
independently.

