* Possible DoS with 6RD border relay
@ 2012-01-04 16:48 Brent Cook
  2012-01-04 17:02 ` Brent Cook
  2012-01-04 17:02 ` Eric Dumazet
  0 siblings, 2 replies; 9+ messages in thread
From: Brent Cook @ 2012-01-04 16:48 UTC (permalink / raw)
  To: netdev

Hi All,

  I have been doing some testing of Linux serving as a 6RD border relay. It 
seems that if a client sends 6RD-encapsulated packets and varies the lower 
64 bits of the 6RD address (the bits below the delegated prefix) across more 
values than the neighbor table can hold, the neighbor table quickly 
overflows. However, viewing the neighbor table never shows more than a 
handful of entries. Once the table overflows, packet routing on my test 
system slows from 1 Gbps to a couple of Mbps at most.

[28765.764079] net_ratelimit: 32003 callbacks suppressed
[28765.764084] ipv6: Neighbour table overflow.
[28765.764171] ipv6: Neighbour table overflow.

root@target1:~# ip neigh
fe80::1a:c5ff:fe02:2 dev test2  router FAILED
2001:1234::3 dev test2 lladdr 02:1a:c5:02:00:02 REACHABLE
192.168.2.1 dev mgmt0 lladdr 04:7d:7b:06:8d:2d REACHABLE
1.0.0.1 dev test0 lladdr 02:1a:c5:01:00:00 REACHABLE

If I send packets much more slowly, the system works as expected. If the 6RD 
client sends from a constant address rather than varying the lower bits, it 
also works fine. I tested the two neighbor table checks in sit.c and 

The network topology looks something like this:

6RD client -> Router -> Linux (6RD BR) -> IPv6 host

The 6RD client is at 1.1.1.1/24
The Linux BR is at 1.0.0.2/24, the IPv4 router is at 1.0.0.1/24 and the IPv6 
host is directly attached on a second physical interface at address 
2001:1234::3

A script for configuring the BR follows:

#!/bin/bash
PREFIX1="2001:0db8"                  # 6rd ipv6 prefix
intf1=test0
intf2=test2

modprobe sit

## Set up the tunnel; it will create an interface named '6rd'
ip addr add 1.0.0.2/24 dev $intf1
ip link set $intf1 up
ip route add 1.1.1.0/24 via 1.0.0.1
ip addr add 2001:1234::1/64 dev $intf2
ip link set $intf2 up
ip tunnel add 6rd mode sit local 1.0.0.2 dev $intf1 ttl 64
ip tunnel 6rd dev 6rd 6rd-prefix ${PREFIX1}::/32
ip addr add ${PREFIX1}::1/32 dev 6rd
ip link set 6rd up

sysctl -w net.ipv6.conf.all.forwarding=1
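
A rough way to reproduce the varying-address pattern in a lab is a loop
like the one below, run from the IPv6 host. This is only a sketch, not the
generator I used; the /64 is the hypothetical 6rd mapping of the client's
1.1.1.1 under the ${PREFIX1}::/32 prefix:

#!/bin/bash
# Reproduction sketch, run on the IPv6 host (2001:1234::3).
# 1.1.1.1 -> 0x01010101, so the client's delegated 6rd prefix under
# 2001:db8::/32 would be 2001:db8:101:101::/64 (hypothetical mapping).
CE_PREFIX="2001:db8:101:101"

# Hit a different address behind the CE on every iteration, so the BR
# must allocate a fresh cloned route and neighbour entry each time.
for i in $(seq 1 2048); do
    ping6 -c 1 -w 1 "${CE_PREFIX}::$(printf '%x' "$i")" >/dev/null 2>&1 &
done
wait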


* Re: Possible DoS with 6RD border relay
  2012-01-04 16:48 Possible DoS with 6RD border relay Brent Cook
@ 2012-01-04 17:02 ` Brent Cook
  2012-01-04 17:25   ` Eric Dumazet
  2012-01-04 17:02 ` Eric Dumazet
  1 sibling, 1 reply; 9+ messages in thread
From: Brent Cook @ 2012-01-04 17:02 UTC (permalink / raw)
  To: netdev

On Wednesday, January 04, 2012 10:48:44 AM Brent Cook wrote:
> Hi All,
> 
>   I have been doing some testing of Linux serving as a 6RD border relay. It
> seems that if a client sends 6RD-encapsulated packets and varies the lower
> 64 bits of the 6RD address (the bits below the delegated prefix) across more
> values than the neighbor table can hold, the neighbor table quickly
> overflows. However, viewing the neighbor table never shows more than a
> handful of entries. Once the table overflows, packet routing on my test
> system slows from 1 Gbps to a couple of Mbps at most.

I forgot to mention, I'm testing 3.2 rc7:

Linux target1 3.2.0-7-generic #13-Ubuntu SMP Sat Dec 24 18:06:57 UTC 2011 
x86_64 x86_64 x86_64 GNU/Linux

but the same behavior occurs with 2.6.35.

> [28765.764079] net_ratelimit: 32003 callbacks suppressed
> [28765.764084] ipv6: Neighbour table overflow.
> [28765.764171] ipv6: Neighbour table overflow.
> 
> root@target1:~# ip neigh
> fe80::1a:c5ff:fe02:2 dev test2  router FAILED
> 2001:1234::3 dev test2 lladdr 02:1a:c5:02:00:02 REACHABLE
> 192.168.2.1 dev mgmt0 lladdr 04:7d:7b:06:8d:2d REACHABLE
> 1.0.0.1 dev test0 lladdr 02:1a:c5:01:00:00 REACHABLE
> 
> If I send packets much more slowly, the system works as expected. If the
> 6RD client sends from a constant address rather than varying the lower
> bits, it also works fine. I tested the two neighbor table checks in sit.c
> and

Continuing the thought: ipip6_tunnel_xmit does not appear to be hitting the 
ISATAP or !dst sections that might invoke a call to dst_get_neighbour().
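
A quick way to double-check which sit.c paths fire under load is the
function tracer. A sketch, assuming CONFIG_FUNCTION_TRACER is enabled and
debugfs is mounted at /sys/kernel/debug:

cd /sys/kernel/debug/tracing
echo 'ipip6_tunnel_xmit' > set_ftrace_filter   # trace only this function
echo function > current_tracer
echo 1 > tracing_on
sleep 5                  # generate the 6RD test traffic during this window
echo 0 > tracing_on
head -n 50 trace         # hits here confirm the xmit path is being taken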

> The network topology looks something like this:
> 
> 6RD client -> Router -> Linux (6RD BR) -> IPv6 host
> 
> The 6RD client is at 1.1.1.1/24
> The Linux BR is at 1.0.0.2/24, the IPv4 router is at 1.0.0.1/24 and the
> IPv6 host is directly attached on a second physical interface at address
> 2001:1234::3
> 
> A script for configuring the BR follows:
> 
> #!/bin/bash
> PREFIX1="2001:0db8"                  # 6rd ipv6 prefix
> intf1=test0
> intf2=test2
> 
> modprobe sit
> 
> ## Set up the tunnel; it will create an interface named '6rd'
> ip addr add 1.0.0.2/24 dev $intf1
> ip link set $intf1 up
> ip route add 1.1.1.0/24 via 1.0.0.1
> ip addr add 2001:1234::1/64 dev $intf2
> ip link set $intf2 up
> ip tunnel add 6rd mode sit local 1.0.0.2 dev $intf1 ttl 64
> ip tunnel 6rd dev 6rd 6rd-prefix ${PREFIX1}::/32
> ip addr add ${PREFIX1}::1/32 dev 6rd
> ip link set 6rd up
> 
> sysctl -w net.ipv6.conf.all.forwarding=1


* Re: Possible DoS with 6RD border relay
  2012-01-04 16:48 Possible DoS with 6RD border relay Brent Cook
  2012-01-04 17:02 ` Brent Cook
@ 2012-01-04 17:02 ` Eric Dumazet
  1 sibling, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2012-01-04 17:02 UTC (permalink / raw)
  To: Brent Cook; +Cc: netdev

On Wednesday, 4 January 2012 at 10:48 -0600, Brent Cook wrote:
> Hi All,
> 
>   I have been doing some testing of Linux serving as a 6RD border relay. It 
> seems that if a client sends 6RD-encapsulated packets and varies the lower 
> 64 bits of the 6RD address (the bits below the delegated prefix) across more 
> values than the neighbor table can hold, the neighbor table quickly 
> overflows. However, viewing the neighbor table never shows more than a 
> handful of entries. Once the table overflows, packet routing on my test 
> system slows from 1 Gbps to a couple of Mbps at most.
> 
> [28765.764079] net_ratelimit: 32003 callbacks suppressed
> [28765.764084] ipv6: Neighbour table overflow.
> [28765.764171] ipv6: Neighbour table overflow.
> 
> root@target1:~# ip neigh
> fe80::1a:c5ff:fe02:2 dev test2  router FAILED
> 2001:1234::3 dev test2 lladdr 02:1a:c5:02:00:02 REACHABLE
> 192.168.2.1 dev mgmt0 lladdr 04:7d:7b:06:8d:2d REACHABLE
> 1.0.0.1 dev test0 lladdr 02:1a:c5:01:00:00 REACHABLE
> 
> If I send packets much more slowly, the system works as expected. If the 6RD 
> client sends from a constant address rather than varying the lower bits, it 
> also works fine. I tested the two neighbor table checks in sit.c and 
> 
> The network topology looks something like this:
> 
> 6RD client -> Router -> Linux (6RD BR) -> IPv6 host
> 
> The 6RD client is at 1.1.1.1/24
> The Linux BR is at 1.0.0.2/24, the IPv4 router is at 1.0.0.1/24 and the IPv6 
> host is directly attached on a second physical interface at address 
> 2001:1234::3
> 
> A script for configuring the BR follows:
> 
> #!/bin/bash
> PREFIX1="2001:0db8"                  # 6rd ipv6 prefix
> intf1=test0
> intf2=test2
> 
> modprobe sit
> 
> ## Set up the tunnel; it will create an interface named '6rd'
> ip addr add 1.0.0.2/24 dev $intf1
> ip link set $intf1 up
> ip route add 1.1.1.0/24 via 1.0.0.1
> ip addr add 2001:1234::1/64 dev $intf2
> ip link set $intf2 up
> ip tunnel add 6rd mode sit local 1.0.0.2 dev $intf1 ttl 64
> ip tunnel 6rd dev 6rd 6rd-prefix ${PREFIX1}::/32
> ip addr add ${PREFIX1}::1/32 dev 6rd
> ip link set 6rd up
> 
> sysctl -w net.ipv6.conf.all.forwarding=1

What kernel version do you use?


* Re: Possible DoS with 6RD border relay
  2012-01-04 17:02 ` Brent Cook
@ 2012-01-04 17:25   ` Eric Dumazet
  2012-01-04 17:35     ` Brent Cook
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2012-01-04 17:25 UTC (permalink / raw)
  To: Brent Cook; +Cc: netdev

On Wednesday, 4 January 2012 at 11:02 -0600, Brent Cook wrote:

> I forgot to mention, I'm testing 3.2 rc7:
> 
> Linux target1 3.2.0-7-generic #13-Ubuntu SMP Sat Dec 24 18:06:57 UTC 2011 
> x86_64 x86_64 x86_64 GNU/Linux
> 
> but the same behavior occurs with 2.6.35.

Please check:

grep . /proc/sys/net/ipv6/route/*


* Re: Possible DoS with 6RD border relay
  2012-01-04 17:25   ` Eric Dumazet
@ 2012-01-04 17:35     ` Brent Cook
  2012-01-04 17:53       ` Eric Dumazet
  0 siblings, 1 reply; 9+ messages in thread
From: Brent Cook @ 2012-01-04 17:35 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

On Wednesday, January 04, 2012 11:25:31 AM Eric Dumazet wrote:
> On Wednesday, 4 January 2012 at 11:02 -0600, Brent Cook wrote:
> > I forgot to mention, I'm testing 3.2 rc7:
> > 
> > Linux target1 3.2.0-7-generic #13-Ubuntu SMP Sat Dec 24 18:06:57 UTC 2011
> > x86_64 x86_64 x86_64 GNU/Linux
> > 
> > but the same behavior occurs with 2.6.35.
> 
> Please check:
> 
> grep . /proc/sys/net/ipv6/route/*

/proc/sys/net/ipv6/route/gc_elasticity:9
/proc/sys/net/ipv6/route/gc_interval:30
/proc/sys/net/ipv6/route/gc_min_interval:0
/proc/sys/net/ipv6/route/gc_min_interval_ms:500
/proc/sys/net/ipv6/route/gc_thresh:1024
/proc/sys/net/ipv6/route/gc_timeout:60
/proc/sys/net/ipv6/route/max_size:4096
/proc/sys/net/ipv6/route/min_adv_mss:1220
/proc/sys/net/ipv6/route/mtu_expires:600

This is a system with 8 GB of RAM.

If I modify gc_thresh to be >= the number of addresses the client varies 
over, the system works OK:

/proc/sys/net/ipv6/neigh/default/gc_thresh1:128
/proc/sys/net/ipv6/neigh/default/gc_thresh2:512
/proc/sys/net/ipv6/neigh/default/gc_thresh3:1024

root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh1
root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh2
root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh3

But it seems to be a losing battle since the client has a delegated prefix of 
/64.
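
For reference, the same change in sysctl form, with what each threshold
means per Documentation/networking/ip-sysctl.txt (200000 is just the value
from my test, not a recommendation):

sysctl -w net.ipv6.neigh.default.gc_thresh1=200000  # below this, GC leaves entries alone
sysctl -w net.ipv6.neigh.default.gc_thresh2=200000  # soft max: entries older than 5s become collectable
sysctl -w net.ipv6.neigh.default.gc_thresh3=200000  # hard max: "Neighbour table overflow" above this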


* Re: Possible DoS with 6RD border relay
  2012-01-04 17:35     ` Brent Cook
@ 2012-01-04 17:53       ` Eric Dumazet
  2012-01-04 19:26         ` Brent Cook
  0 siblings, 1 reply; 9+ messages in thread
From: Eric Dumazet @ 2012-01-04 17:53 UTC (permalink / raw)
  To: Brent Cook; +Cc: netdev

On Wednesday, 4 January 2012 at 11:35 -0600, Brent Cook wrote:
> On Wednesday, January 04, 2012 11:25:31 AM Eric Dumazet wrote:
> > On Wednesday, 4 January 2012 at 11:02 -0600, Brent Cook wrote:
> > > I forgot to mention, I'm testing 3.2 rc7:
> > > 
> > > Linux target1 3.2.0-7-generic #13-Ubuntu SMP Sat Dec 24 18:06:57 UTC 2011
> > > x86_64 x86_64 x86_64 GNU/Linux
> > > 
> > > but the same behavior occurs with 2.6.35.
> > 
> > Please check:
> > 
> > grep . /proc/sys/net/ipv6/route/*
> 
> /proc/sys/net/ipv6/route/gc_elasticity:9
> /proc/sys/net/ipv6/route/gc_interval:30
> /proc/sys/net/ipv6/route/gc_min_interval:0
> /proc/sys/net/ipv6/route/gc_min_interval_ms:500
> /proc/sys/net/ipv6/route/gc_thresh:1024
> /proc/sys/net/ipv6/route/gc_timeout:60
> /proc/sys/net/ipv6/route/max_size:4096
> /proc/sys/net/ipv6/route/min_adv_mss:1220
> /proc/sys/net/ipv6/route/mtu_expires:600
> 
> This is a system with 8 GB of RAM.
> 
> If I modify gc_thresh to be >= the number of addresses the client varies 
> over, the system works OK:
> 
> /proc/sys/net/ipv6/neigh/default/gc_thresh1:128
> /proc/sys/net/ipv6/neigh/default/gc_thresh2:512
> /proc/sys/net/ipv6/neigh/default/gc_thresh3:1024
> 
> root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh1
> root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh2
> root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh3
> 
> But it seems to be a losing battle since the client has a delegated prefix of 
> /64.

I am not sure of this.

Try to change /proc/sys/net/ipv6/route/max_size

and /proc/sys/net/ipv6/route/gc_thresh

[to something larger than the number of in-flight packets on your gateway]


* Re: Possible DoS with 6RD border relay
  2012-01-04 17:53       ` Eric Dumazet
@ 2012-01-04 19:26         ` Brent Cook
  2012-01-05  4:22           ` Brent Cook
  0 siblings, 1 reply; 9+ messages in thread
From: Brent Cook @ 2012-01-04 19:26 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

On Wednesday, January 04, 2012 11:53:20 AM Eric Dumazet wrote:
> On Wednesday, 4 January 2012 at 11:35 -0600, Brent Cook wrote:
> > On Wednesday, January 04, 2012 11:25:31 AM Eric Dumazet wrote:
> > > On Wednesday, 4 January 2012 at 11:02 -0600, Brent Cook wrote:
> > > > I forgot to mention, I'm testing 3.2 rc7:
> > > > 
> > > > Linux target1 3.2.0-7-generic #13-Ubuntu SMP Sat Dec 24 18:06:57 UTC
> > > > 2011 x86_64 x86_64 x86_64 GNU/Linux
> > > > 
> > > > but the same behavior occurs with 2.6.35.
> > > 
> > > Please check:
> > > 
> > > grep . /proc/sys/net/ipv6/route/*
> > 
> > /proc/sys/net/ipv6/route/gc_elasticity:9
> > /proc/sys/net/ipv6/route/gc_interval:30
> > /proc/sys/net/ipv6/route/gc_min_interval:0
> > /proc/sys/net/ipv6/route/gc_min_interval_ms:500
> > /proc/sys/net/ipv6/route/gc_thresh:1024
> > /proc/sys/net/ipv6/route/gc_timeout:60
> > /proc/sys/net/ipv6/route/max_size:4096
> > /proc/sys/net/ipv6/route/min_adv_mss:1220
> > /proc/sys/net/ipv6/route/mtu_expires:600
> > 
> > This is a system with 8 GB of RAM.
> > 
> > If I modify gc_thresh to be >= the number of addresses the client varies
> > over, the system works OK:
> > 
> > /proc/sys/net/ipv6/neigh/default/gc_thresh1:128
> > /proc/sys/net/ipv6/neigh/default/gc_thresh2:512
> > /proc/sys/net/ipv6/neigh/default/gc_thresh3:1024
> > 
> > root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh1
> > root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh2
> > root@target1:~# echo 200000 > /proc/sys/net/ipv6/neigh/default/gc_thresh3
> > 
> > But it seems to be a losing battle since the client has a delegated
> > prefix of /64.
> 
> I am not sure of this.
> 
> Try to change /proc/sys/net/ipv6/route/max_size
> 
> and /proc/sys/net/ipv6/route/gc_thresh
> 
> [to something larger than the number of in-flight packets on your gateway]

Thanks for the suggestion; I tried 200k:

root@target1:~# echo 200000 > /proc/sys/net/ipv6/route/max_size 
root@target1:~# echo 200000 > /proc/sys/net/ipv6/route/gc_thresh 

It did not seem to improve the behavior: once the neighbor table overflow 
hits, things go downhill. So far, only raising the neighbor cache thresholds 
seems to improve things.
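
One note on the "ip neigh only shows a handful of entries" mystery from my
first mail: by default ip neigh hides entries in NUD_NOARP state, which I
believe is the state neighbours on a NOARP device like sit are created in.
A sketch for watching where the entries actually accumulate:

# Ask for all NUD states instead of the default filtered view:
ip neigh show nud all | wc -l

# /proc/net/stat/ndisc_cache reports the true entry count: first field
# of each per-CPU row, in hex, identical across rows.
while sleep 1; do
    printf 'ndisc entries: %d\n' "0x$(awk 'NR==2 {print $1}' /proc/net/stat/ndisc_cache)"
done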


* Re: Possible DoS with 6RD border relay
  2012-01-04 19:26         ` Brent Cook
@ 2012-01-05  4:22           ` Brent Cook
  2012-01-05 19:20             ` David Miller
  0 siblings, 1 reply; 9+ messages in thread
From: Brent Cook @ 2012-01-05  4:22 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

On Wednesday, January 04, 2012 01:26:04 PM Brent Cook wrote:
> On Wednesday, January 04, 2012 11:53:20 AM Eric Dumazet wrote:
> > 
> > I am not sure of this.
> > 
> > Try to change /proc/sys/net/ipv6/route/max_size
> > 
> > and /proc/sys/net/ipv6/route/gc_thresh
> > 
> > [to something larger than the number of in-flight packets on your gateway]
> 
> Thanks for the suggestion; I tried 200k:
> 
> root@target1:~# echo 200000 > /proc/sys/net/ipv6/route/max_size
> root@target1:~# echo 200000 > /proc/sys/net/ipv6/route/gc_thresh
> 
> It did not seem to improve the behavior: once the neighbor table overflow
> hits, things go downhill. So far, only raising the neighbor cache
> thresholds seems to improve things.

After some more examination, it appears that the extra neighbor entries are 
only allocated for traffic flowing from the native IPv6 host to the 6RD 
client. Packets generated from the 6RD client to the native IPv6 host did 
not cause a problem by themselves. I originally tested bidirectional TCP 
traffic, but switched to unidirectional UDP to isolate the routing paths.

 Pid: 0, comm: swapper/3 Not tainted 3.2.0-rc7 #8
 Call Trace:
  <IRQ>  
  [<ffffffffa013a476>] ? rt6_bind_peer+0x36/0x80 [ipv6]
  [<ffffffff8153bed0>] neigh_create+0x30/0x550  
  [<ffffffff8153923d>] ? neigh_lookup+0xcd/0x100  
  [<ffffffffa013a182>] rt6_alloc_cow+0x202/0x240 [ipv6]  
  [<ffffffffa013aa0b>] ip6_pol_route.isra.36+0x38b/0x3a0 [ipv6]
  [<ffffffffa013aa7d>] ip6_pol_route_input+0x2d/0x30 [ipv6]
  [<ffffffffa015caa1>] fib6_rule_action+0xd1/0x1f0 [ipv6]
  [<ffffffffa013aa50>] ? ip6_pol_route_output+0x30/0x30 [ipv6]
  [<ffffffff815316b1>] ? dev_queue_xmit+0x1c1/0x630
  [<ffffffff81546acd>] fib_rules_lookup+0xcd/0x150
  [<ffffffffa015ce64>] fib6_rule_lookup+0x44/0x80 [ipv6]
  [<ffffffffa013aa50>] ? ip6_pol_route_output+0x30/0x30 [ipv6]
  [<ffffffffa013ab44>] ip6_route_input+0xc4/0xf0 [ipv6]
  [<ffffffffa0130177>] ipv6_rcv+0x317/0x3c0 [ipv6]
  [<ffffffff8152ed1a>] __netif_receive_skb+0x51a/0x5c0  
  [<ffffffff8152f990>] netif_receive_skb+0x80/0x90  
  [<ffffffff8152fd89>] ? dev_gro_receive+0x1b9/0x2c0
  [<ffffffff8152fad0>] napi_skb_finish+0x50/0x70
  [<ffffffff81530005>] napi_gro_receive+0xb5/0xc0
  [<ffffffffa001034b>] e1000_receive_skb+0x5b/0x70 [e1000e]
  [<ffffffffa0012122>] e1000_clean_rx_irq+0x352/0x460 [e1000e]
  [<ffffffffa00117f8>] e1000_clean+0x78/0x2b0 [e1000e]
  [<ffffffff81530214>] net_rx_action+0x134/0x290
  [<ffffffff8106c4f8>] __do_softirq+0xa8/0x210    
  [<ffffffff8160736e>] ? _raw_spin_lock+0xe/0x20
  [<ffffffff816116ac>] call_softirq+0x1c/0x30  
  [<ffffffff81015195>] do_softirq+0x65/0xa0
  [<ffffffff8106c8de>] irq_exit+0x8e/0xb0  
  [<ffffffff81611f63>] do_IRQ+0x63/0xe0
  [<ffffffff8160782e>] common_interrupt+0x6e/0x6e

Is this expected behavior? All of the peers in this case are really the same 
6RD client; it's simulating a customer edge router with a few thousand hosts 
behind it. I suspect that adding a static route entry for the CE's prefix 
via 'sit' would also make the problem go away.
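
The route I have in mind is something like this (hypothetical; the /64 is
the 6rd mapping of the CE's 1.1.1.1 under the 2001:db8::/32 prefix):

# Suspected workaround (untested): one static route covering the CE's
# whole delegated /64 through the tunnel device.
ip -6 route add 2001:db8:101:101::/64 dev 6rd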


* Re: Possible DoS with 6RD border relay
  2012-01-05  4:22           ` Brent Cook
@ 2012-01-05 19:20             ` David Miller
  0 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2012-01-05 19:20 UTC (permalink / raw)
  To: bcook; +Cc: eric.dumazet, netdev

From: Brent Cook <bcook@breakingpoint.com>
Date: Wed, 4 Jan 2012 22:22:52 -0600

> Is this expected behavior? All of the peers in this case are really the same 
> 6RD client; it's simulating a customer edge router with a few thousand hosts 
> behind it. I suspect that adding a static route entry for the CE's prefix 
> via 'sit' would also make the problem go away.

Any route which refers to more than one exact host as the destination must
be cloned or COW'd.

This is so that we can provide unique metrics for the destination, as
well as a unique neighbour for the nexthop.

In your case, the keys used to look up the nexthop for all of these
routes must be different; otherwise you wouldn't hit the neighbour
table limits, since if they were all the same we'd find an existing
neighbour entry and just bump its reference count.

Longer term I intend to make the IPv4 and IPv6 routes not take a
reference to the neighbour entries. The neighbour entries will be
"refcount-less", and we'll just look up the neighbour entry at packet
output time using a low-latency fast lookup inside of a tight
RCU-protected code sequence.

So all of this "neighbour table overflow" crap will just disappear.
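
For anyone following along, the per-destination clones can be watched
directly on kernels of this era (a sketch):

ip -6 route show cache | head -n 5   # the RTF_CACHE clones described above
ip -6 route show cache | wc -l       # grows with each new 6RD address seen
ip -6 route flush cache              # releases the clones and their refs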

