netfilter-devel.vger.kernel.org archive mirror
* Any Performance benchmark on a Million conntracks
@ 2010-05-20 12:51 Anand Raj Manickam
  2010-05-20 13:04 ` Eric Dumazet
  2010-05-21 14:05 ` Simon Lodal
  0 siblings, 2 replies; 9+ messages in thread
From: Anand Raj Manickam @ 2010-05-20 12:51 UTC (permalink / raw)
  To: netfilter-devel

Hi,
Is there any performance benchmark on how conntrack responds with 1 million
conntrack entries in the conntrack table?
Since conntrack uses hashing to look up the entries, I have some doubts
about its scalability. Can someone shed some light, please?
Thanks,
Anand


* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 12:51 Any Performance benchmark on a Million conntracks Anand Raj Manickam
@ 2010-05-20 13:04 ` Eric Dumazet
  2010-05-20 14:03   ` Patrick McHardy
  2010-05-21 14:05 ` Simon Lodal
  1 sibling, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2010-05-20 13:04 UTC (permalink / raw)
  To: Anand Raj Manickam; +Cc: netfilter-devel

On Thursday, 20 May 2010 at 18:21 +0530, Anand Raj Manickam wrote:
> Hi,
> Is there any performance benchmark on how conntrack responds with 1 million
> conntrack entries in the conntrack table?
> Since conntrack uses hashing to look up the entries, I have some doubts
> about its scalability. Can someone shed some light, please?

The question is not about the number of conntrack entries in the hash
table, but the number of inserts and deletes per second.

For persistent connections, if you use a hash table of one million
slots, performance will be very good, since the chain length is small.
It's scalable because each CPU can access the conntrack table in
parallel, without locks.

The real problem comes from the serialization of inserts/deletes on a
central lock. Even with few entries (fewer than 50,000), this can be a
problem because it's not scalable.
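
To put a rough number on the chain-length argument, here is a
back-of-envelope sketch (plain user-space C, not kernel code; the entry
and bucket counts are illustrative assumptions):

/* Back-of-envelope model of hash chain length (illustrative only).
 * With the bucket count on the order of the entry count, the average
 * chain is about one entry, so a lookup is one hash plus a very short
 * list walk. */
#include <stdio.h>

int main(void)
{
	unsigned long entries = 1000000;	/* concurrent conntrack entries (assumed) */
	unsigned long buckets = 1048576;	/* ~one million hash slots (assumed) */

	printf("average chain length: %.2f entries per bucket\n",
	       (double)entries / buckets);
	return 0;
}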





* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 13:04 ` Eric Dumazet
@ 2010-05-20 14:03   ` Patrick McHardy
  2010-05-20 17:13     ` Anand Raj Manickam
  0 siblings, 1 reply; 9+ messages in thread
From: Patrick McHardy @ 2010-05-20 14:03 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Anand Raj Manickam, netfilter-devel

Eric Dumazet wrote:
> On Thursday, 20 May 2010 at 18:21 +0530, Anand Raj Manickam wrote:
>> Hi,
>> Is there any performance benchmark on how conntrack responds with 1 million
>> conntrack entries in the conntrack table?
>> Since conntrack uses hashing to look up the entries, I have some doubts
>> about its scalability. Can someone shed some light, please?
> 
> The question is not about the number of conntrack entries in the hash
> table, but the number of inserts and deletes per second.
>
> For persistent connections, if you use a hash table of one million
> slots, performance will be very good, since the chain length is small.
> It's scalable because each CPU can access the conntrack table in
> parallel, without locks.

Actually, the recommended hash table size is twice the number of
expected connections, since each conntrack is hashed twice :)


* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 14:03   ` Patrick McHardy
@ 2010-05-20 17:13     ` Anand Raj Manickam
  2010-05-20 17:44       ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Anand Raj Manickam @ 2010-05-20 17:13 UTC (permalink / raw)
  To: Patrick McHardy; +Cc: Eric Dumazet, netfilter-devel

On Thu, May 20, 2010 at 7:33 PM, Patrick McHardy <kaber@trash.net> wrote:
> Eric Dumazet wrote:
>> On Thursday, 20 May 2010 at 18:21 +0530, Anand Raj Manickam wrote:
>>> Hi,
>>> Is there any performance benchmark on how conntrack responds with 1 million
>>> conntrack entries in the conntrack table?
>>> Since conntrack uses hashing to look up the entries, I have some doubts
>>> about its scalability. Can someone shed some light, please?
>>
>> The question is not about the number of conntrack entries in the hash
>> table, but the number of inserts and deletes per second.
>>
>> For persistent connections, if you use a hash table of one million
>> slots, performance will be very good, since the chain length is small.
>> It's scalable because each CPU can access the conntrack table in
>> parallel, without locks.
>
My understanding is that persistent connections are less likely on
networks facing the Internet.

Suppose there are around 50,000 connection adds and 50,000 connection
deletes against a table of 1 million concurrent conntrack entries; do we
have a scalability problem?

The reason I'm posting this question is the ability of hash tables to
handle 1 million entries vs. rb-trees handling 1 million entries.

> Actually, the recommended hash table size is twice the number of
> expected connections, since each conntrack is hashed twice :)
>

So, if I'm expecting 1 million connections (just connections from users,
NOT via HELPERS/EXPECTATIONS), do I need to set the conntrack table to
2 million?

How much memory do we need to maintain 1 million connections?

The typical iptables/netfilter guidance says about 32k connections for
512 MB of RAM and 64k connections for more than 1 GB.
As per my understanding, each conntrack entry is about 300-odd bytes;
assuming 310 bytes per conntrack entry, (310 * 1000000) is roughly
300 MB.
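
Along the lines of that estimate, here is a small sketch (user-space C;
the 310 bytes per entry is the assumption above, the real struct nf_conn
size depends on kernel version and compile options, the bucket count
follows Patrick's twice-the-connections guideline, and 64-bit pointers
are assumed for the bucket heads):

/* Rough memory estimate for tracking one million connections (a sketch;
 * the sizes are assumptions, not measured kernel values). */
#include <stdio.h>

int main(void)
{
	unsigned long expected_conns = 1000000;
	unsigned long buckets = 2 * expected_conns;	/* 2x: each conntrack is hashed twice */
	unsigned long entry_bytes = 310;		/* assumed ~300-odd bytes per entry */
	unsigned long bucket_bytes = sizeof(void *);	/* one pointer-sized list head per bucket */

	unsigned long entry_mem = expected_conns * entry_bytes;
	unsigned long bucket_mem = buckets * bucket_bytes;

	printf("entries: %lu MB, buckets: %lu MB, total: %lu MB\n",
	       entry_mem >> 20, bucket_mem >> 20, (entry_mem + bucket_mem) >> 20);
	return 0;
}

Under those assumptions the entries dominate: roughly 295 MB for the
conntrack entries and only about 15 MB for a two-million-slot bucket
array.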


* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 17:13     ` Anand Raj Manickam
@ 2010-05-20 17:44       ` Eric Dumazet
  2010-05-20 23:43         ` Changli Gao
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2010-05-20 17:44 UTC (permalink / raw)
  To: Anand Raj Manickam; +Cc: Patrick McHardy, netfilter-devel

On Thursday, 20 May 2010 at 22:43 +0530, Anand Raj Manickam wrote:

> Suppose there are around 50,000 connection adds and 50,000 connection
> deletes against a table of 1 million concurrent conntrack entries; do we
> have a scalability problem?
> 

Yes, unless you use a single CPU.

> The reason I'm posting this question is the ability of hash tables to
> handle 1 million entries vs. rb-trees handling 1 million entries.

Do you have an idea of the depth of an rb-tree with 1 million entries?

A well-sized hash table is about 25x faster than an rb-tree in this case
for pure lookups, and inserts and deletes in the hash table are about
100x faster.

hash table: one or two cache misses per lookup or insert/delete

rb-tree with one million entries: about 25 cache misses per lookup, and
maybe 100 cache misses per insert/delete.
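
The depth figure can be sanity-checked with a couple of lines (a model
of pointer chases, not a measurement; treating every visited tree level
as a potential cache miss is an assumption):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double n = 1e6;

	/* A red-black tree with n nodes is between log2(n) and roughly
	 * 2*log2(n+1) levels deep; each level visited during a lookup is
	 * a potential cache miss. */
	printf("rb-tree depth for 1e6 entries: %.0f to %.0f levels\n",
	       log2(n), 2.0 * log2(n + 1.0));

	/* A hash table sized to the entry count touches the bucket head
	 * plus a chain of about one entry. */
	printf("well-sized hash table: ~1-2 cache lines per lookup\n");
	return 0;
}

For one million entries that is roughly 20 to 40 node visits per tree
lookup, which is where the "about 25 cache misses" ballpark comes from.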






* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 17:44       ` Eric Dumazet
@ 2010-05-20 23:43         ` Changli Gao
  2010-05-21  2:34           ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Changli Gao @ 2010-05-20 23:43 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Anand Raj Manickam, Patrick McHardy, netfilter-devel

On Fri, May 21, 2010 at 1:44 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> Do you have an idea of the depth of an rb-tree with 1 million entries?
>
> A well-sized hash table is about 25x faster than an rb-tree in this case
> for pure lookups, and inserts and deletes in the hash table are about
> 100x faster.
>
> hash table: one or two cache misses per lookup or insert/delete
>
> rb-tree with one million entries: about 25 cache misses per lookup, and
> maybe 100 cache misses per insert/delete.
>
>

And we have to do insertions and deletions serially with an rb-tree, so
an rb-tree doesn't scale as well as a hash table for parallel processing
if there are many insertion and deletion operations.


-- 
Regards,
Changli Gao(xiaosuo@gmail.com)


* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 23:43         ` Changli Gao
@ 2010-05-21  2:34           ` Eric Dumazet
  2010-05-21  3:06             ` Changli Gao
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2010-05-21  2:34 UTC (permalink / raw)
  To: Changli Gao; +Cc: Anand Raj Manickam, Patrick McHardy, netfilter-devel

On Friday, 21 May 2010 at 07:43 +0800, Changli Gao wrote:
> On Fri, May 21, 2010 at 1:44 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> >
> > Do you have an idea of the depth of an rb-tree with 1 million entries?
> >
> > A well-sized hash table is about 25x faster than an rb-tree in this case
> > for pure lookups, and inserts and deletes in the hash table are about
> > 100x faster.
> >
> > hash table: one or two cache misses per lookup or insert/delete
> >
> > rb-tree with one million entries: about 25 cache misses per lookup, and
> > maybe 100 cache misses per insert/delete.
> >
> >
> 
> And we have to do insertions and deletions serially with an rb-tree, so
> an rb-tree doesn't scale as well as a hash table for parallel processing
> if there are many insertion and deletion operations.
> 
> 

Before saying such things, you should read the source code, Changli,
because you are only propagating wrong information.

conntrack uses a single lock, so inserts and deletes _are_ serialized,
_even_ with a hash table.
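
As a user-space illustration of that point (a sketch, not the kernel
code; the names, sizes and thread count are made up), threads inserting
into a hash table behind one global lock serialize on that lock no
matter how many buckets the table has:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_BUCKETS		1048576
#define NR_THREADS		4
#define INSERTS_PER_THREAD	250000

struct node {
	struct node *next;
	unsigned int key;
};

static struct node *table[NR_BUCKETS];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;	/* the single central lock */

static void *inserter(void *arg)
{
	unsigned int base = (unsigned int)(unsigned long)arg * INSERTS_PER_THREAD;
	unsigned int i;

	for (i = 0; i < INSERTS_PER_THREAD; i++) {
		struct node *n = malloc(sizeof(*n));
		unsigned int hash = (base + i) % NR_BUCKETS;

		n->key = base + i;
		pthread_mutex_lock(&table_lock);	/* every thread queues here */
		n->next = table[hash];
		table[hash] = n;
		pthread_mutex_unlock(&table_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];
	long t;

	for (t = 0; t < NR_THREADS; t++)
		pthread_create(&tid[t], NULL, inserter, (void *)t);
	for (t = 0; t < NR_THREADS; t++)
		pthread_join(tid[t], NULL);

	printf("%d inserts done under one global lock\n",
	       NR_THREADS * INSERTS_PER_THREAD);
	return 0;
}

With one global table_lock the insert rate does not scale with the
thread count, which is exactly the serialization being discussed;
lookups are left out because, as noted earlier in the thread, they can
proceed in parallel without the lock.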





* Re: Any Performance benchmark on a Million conntracks
  2010-05-21  2:34           ` Eric Dumazet
@ 2010-05-21  3:06             ` Changli Gao
  0 siblings, 0 replies; 9+ messages in thread
From: Changli Gao @ 2010-05-21  3:06 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Anand Raj Manickam, Patrick McHardy, netfilter-devel

On Fri, May 21, 2010 at 10:34 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Friday, 21 May 2010 at 07:43 +0800, Changli Gao wrote:
>>
>> And we have to do insertions and deletions serially with an rb-tree, so
>> an rb-tree doesn't scale as well as a hash table for parallel processing
>> if there are many insertion and deletion operations.
>>
>>
>
> Before saying such things, you should read the source code, Changli,
> because you are only propagating wrong information.
>
> conntrack uses a single lock, so inserts and deletes _are_ serialized,
> _even_ with a hash table.
>

Sorry, I should have added "Off Topic", as I said that about the common
case, not about conntrack.

-- 
Regards,
Changli Gao(xiaosuo@gmail.com)


* Re: Any Performance benchmark on a Million conntracks
  2010-05-20 12:51 Any Performance benchmark on a Million conntracks Anand Raj Manickam
  2010-05-20 13:04 ` Eric Dumazet
@ 2010-05-21 14:05 ` Simon Lodal
  1 sibling, 0 replies; 9+ messages in thread
From: Simon Lodal @ 2010-05-21 14:05 UTC (permalink / raw)
  To: Anand Raj Manickam; +Cc: netfilter-devel


FWIW, the load on my routers is regularly 400k conns, 5k inserts + 5k deletes, 
and 280 kpps traffic (140 kpps in each direction). There are certain scalability 
issues that I am investigating, but they do not seem related to conntracking 
at all.

I have configured for 8 million entries (2m buckets).

On Thursday, 20 May 2010 at 14:51:45, Anand Raj Manickam wrote:
> Hi,
> Is there any performance benchmark on how conntrack responds with 1 million
> conntrack entries in the conntrack table?
> Since conntrack uses hashing to look up the entries, I have some doubts
> about its scalability. Can someone shed some light, please?
> Thanks,
> Anand



end of thread, other threads: [~2010-05-21 14:05 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-05-20 12:51 Any Performance benchmark on a Million conntracks Anand Raj Manickam
2010-05-20 13:04 ` Eric Dumazet
2010-05-20 14:03   ` Patrick McHardy
2010-05-20 17:13     ` Anand Raj Manickam
2010-05-20 17:44       ` Eric Dumazet
2010-05-20 23:43         ` Changli Gao
2010-05-21  2:34           ` Eric Dumazet
2010-05-21  3:06             ` Changli Gao
2010-05-21 14:05 ` Simon Lodal
