* Re: TCP 2MSL on loopback
       [not found] <45EBFD13.1060106@symas.com>
@ 2007-03-05 14:28 ` Eric Dumazet
  2007-03-05 15:09   ` [PATCH] twcal_jiffie should be unsigned long, not int Eric Dumazet
  2007-03-06  9:22   ` TCP 2MSL on loopback Howard Chu
  0 siblings, 2 replies; 20+ messages in thread
From: Eric Dumazet @ 2007-03-05 14:28 UTC (permalink / raw)
  To: Howard Chu; +Cc: linux-kernel, netdev

On Monday 05 March 2007 12:20, Howard Chu wrote:
> Why is the Maximum Segment Lifetime a global parameter? Surely the
> maximum possible lifetime of a particular TCP segment depends on the
> actual connection. At the very least, it would be useful to be able to
> set it on a per-interface basis. E.g., in the case of the loopback
> interface, it would be useful to be able to set it to a very small
> duration.

Hi Howard

I think you should address these questions on netdev instead of linux-kernel.

> As I note in this draft
> http://www.ietf.org/internet-drafts/draft-chu-ldap-ldapi-00.txt
> when doing a connection soak test of OpenLDAP using clients connected
> through localhost, the entire port range is exhausted in well under a
> second, at which point the test stalls until a port comes out of
> TIME_WAIT state so the next connection can be opened.
>
> These days it's not uncommon for an OpenLDAP slapd server to handle tens
> of thousands of connections per second in real use (e.g., at Google, or
> at various telcos). While the LDAP server is fast enough to saturate
> even 10gbit ethernet using contemporary CPUs, we have to resort to
> multiple virtual interfaces just to make sure we have enough port
> numbers available.

I don't understand... doesn't the slapd server listen for connections on a
given port, like http? Or is it doing connections like an ftp server?

Of course, if you want to open more than 60,000 concurrent connections using
the 127.0.0.1 address, you might have a problem...

> Ideally the 2MSL parameter would be dynamically adjusted based on the
> route to the destination and the weights associated with those routes.
> In the simplest case, connections between machines on the same subnet
> (i.e., no router hops involved) should have a much smaller default value
> than connections that traverse any routers. I'd settle for a two-level
> setting - with no router hops, use the small value; with any router hops
> use the large value.

Well, is it really an MSL problem?
I did a small test (linux-2.6.21-rc1) and was able to get 1,000,000
connections on localhost on my dual-proc machine in one minute, without an
error.

^ permalink raw reply	[flat|nested] 20+ messages in thread
* [PATCH] twcal_jiffie should be unsigned long, not int
  2007-03-05 14:28 ` TCP 2MSL on loopback Eric Dumazet
@ 2007-03-05 15:09   ` Eric Dumazet
  2007-03-05 21:33     ` David Miller
  2007-03-06  9:22   ` TCP 2MSL on loopback Howard Chu
  1 sibling, 1 reply; 20+ messages in thread
From: Eric Dumazet @ 2007-03-05 15:09 UTC (permalink / raw)
  To: David Miller, netdev; +Cc: Arnaldo Carvalho de Melo

[-- Attachment #1: Type: text/plain, Size: 430 bytes --]

Hi David

While browsing include/net/inet_timewait_sock.h, I found this buggy
definition of twcal_jiffie.

	int twcal_jiffie;

I wonder how inet_twdr_twcal_tick() can really work on x86_64.

This seems to be quite an old bug; it was there before the introduction of
inet_timewait_death_row by Arnaldo Carvalho de Melo.

[PATCH] twcal_jiffie should be unsigned long, not int

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>

[-- Attachment #2: twcal_jiffie.patch --]
[-- Type: text/plain, Size: 477 bytes --]

diff --git a/include/net/inet_timewait_sock.h b/include/net/inet_timewait_sock.h
index f7be1ac..09a2532 100644
--- a/include/net/inet_timewait_sock.h
+++ b/include/net/inet_timewait_sock.h
@@ -66,7 +66,7 @@ #define INET_TWDR_TWKILL_QUOTA 100
 struct inet_timewait_death_row {
 	/* Short-time timewait calendar */
 	int			twcal_hand;
-	int			twcal_jiffie;
+	unsigned long		twcal_jiffie;
 	struct timer_list	twcal_timer;
 	struct hlist_head	twcal_row[INET_TWDR_RECYCLE_SLOTS];

^ permalink raw reply related	[flat|nested] 20+ messages in thread
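A side note on why the int field matters: jiffies is an unsigned long, so on
64-bit it does not fit in 32 bits, and (if I recall correctly) the kernel
deliberately initializes jiffies just below the 32-bit wrap so that exactly
this kind of truncation bug surfaces within minutes of boot. The following is
a minimal userspace sketch, not kernel code, with a made-up counter value,
showing how squeezing the counter through an int corrupts later time
arithmetic of the kind inet_twdr_twcal_tick() does:

/* Userspace sketch only -- shows truncation when an unsigned long
 * jiffies-style counter is stored in an int; the sample value is made up. */
#include <stdio.h>

int main(void)
{
	unsigned long jiffies = 0x100000123UL; /* hypothetical 64-bit counter */
	int twcal_jiffie = jiffies;            /* what the old field stored   */
	unsigned long restored = twcal_jiffie; /* converted back for compares */

	printf("real counter    : %#lx\n", jiffies);
	printf("via int field   : %#lx\n", restored);
	printf("bogus 'elapsed' : %lu jiffies\n", jiffies - restored);
	return 0;
}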
* Re: [PATCH] twcal_jiffie should be unsigned long, not int 2007-03-05 15:09 ` [PATCH] twcal_jiffie should be unsigned long, not int Eric Dumazet @ 2007-03-05 21:33 ` David Miller 0 siblings, 0 replies; 20+ messages in thread From: David Miller @ 2007-03-05 21:33 UTC (permalink / raw) To: dada1; +Cc: netdev, acme From: Eric Dumazet <dada1@cosmosbay.com> Date: Mon, 5 Mar 2007 16:09:21 +0100 > While browsing include/net/inet_timewait_sock.h, I found this buggy definition > of twcal_jiffie. > > int twcal_jiffie; > > I wonder how inet_twdr_twcal_tick() can really works on x86_64 > > This seems quite an old bug, it was there before introduction of > inet_timewait_death_row made by Arnaldo Carvalho de Melo. > > [PATCH] twcal_jiffie should be unsigned long, not int > > Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Grrr, good catch Eric. I'll push this fix to -stable too. Thanks a lot. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback
  2007-03-05 14:28 ` TCP 2MSL on loopback Eric Dumazet
  2007-03-05 15:09   ` [PATCH] twcal_jiffie should be unsigned long, not int Eric Dumazet
@ 2007-03-06  9:22   ` Howard Chu
  2007-03-06 10:42     ` Eric Dumazet
                        ` (2 more replies)
  1 sibling, 3 replies; 20+ messages in thread
From: Howard Chu @ 2007-03-06 9:22 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

Eric Dumazet wrote:
> On Monday 05 March 2007 12:20, Howard Chu wrote:
>> Why is the Maximum Segment Lifetime a global parameter? Surely the
>> maximum possible lifetime of a particular TCP segment depends on the
>> actual connection. At the very least, it would be useful to be able to
>> set it on a per-interface basis. E.g., in the case of the loopback
>> interface, it would be useful to be able to set it to a very small
>> duration.
>
> Hi Howard
>
> I think you should address these questions on netdev instead of linux-kernel.

OK, I just subscribed to netdev...

>> As I note in this draft
>> http://www.ietf.org/internet-drafts/draft-chu-ldap-ldapi-00.txt
>> when doing a connection soak test of OpenLDAP using clients connected
>> through localhost, the entire port range is exhausted in well under a
>> second, at which point the test stalls until a port comes out of
>> TIME_WAIT state so the next connection can be opened.
>>
>> These days it's not uncommon for an OpenLDAP slapd server to handle tens
>> of thousands of connections per second in real use (e.g., at Google, or
>> at various telcos). While the LDAP server is fast enough to saturate
>> even 10gbit ethernet using contemporary CPUs, we have to resort to
>> multiple virtual interfaces just to make sure we have enough port
>> numbers available.
>
> I don't understand... doesn't the slapd server listen for connections on a
> given port, like http? Or is it doing connections like an ftp server?

No, you're right, it listens on a single port. There is a standard port
(389), though of course you can use any port you want.

> Of course, if you want to open more than 60,000 concurrent connections using
> the 127.0.0.1 address, you might have a problem...

This is probably not something that happens in real-world deployments.
But it's not 60,000 concurrent connections, it's 60,000 within a 2-minute
span.

I'm not saying this is a high-priority problem, I only encountered it in
a test scenario where I was deliberately trying to max out the server.

>> Ideally the 2MSL parameter would be dynamically adjusted based on the
>> route to the destination and the weights associated with those routes.
>> In the simplest case, connections between machines on the same subnet
>> (i.e., no router hops involved) should have a much smaller default value
>> than connections that traverse any routers. I'd settle for a two-level
>> setting - with no router hops, use the small value; with any router hops
>> use the large value.
>
> Well, is it really an MSL problem?
> I did a small test (linux-2.6.21-rc1) and was able to get 1,000,000
> connections on localhost on my dual-proc machine in one minute, without an
> error.

It's a combination of 2MSL and /proc/sys/net/ipv4/ip_local_port_range -
on my system the default port range is 32768-61000. That means if I use
up 28232 ports in less than 2MSL then everything stops. netstat will
show that all the available port numbers are in TIME_WAIT state. And
this is particularly bad because while waiting for the timeout, I can't
initiate any new outbound connections of any kind at all - telnet, ssh,
whatever, you have to wait for at least one port to free up.
(Interesting denial of service there....)

Granted, I was running my test on 2.6.18, perhaps 2.6.21 behaves
differently.
--
  -- Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun        http://highlandsun.com/hyc
  Chief Architect, OpenLDAP     http://www.openldap.org/project/

^ permalink raw reply	[flat|nested] 20+ messages in thread
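To make the arithmetic behind that stall concrete: with a fixed hold time on
every client port in TIME_WAIT, the sustainable rate of new outbound
connections from one source address to one destination socket is roughly the
size of the ephemeral range divided by that hold time. The sketch below is a
back-of-the-envelope illustration only; the 60-second figure is an assumption
on my part (Linux's TIME_WAIT interval is the compile-time constant
TCP_TIMEWAIT_LEN, if memory serves, not a literal 2*MSL), as is the 60,000
connections/second offered load used for the exhaustion estimate.

/* Back-of-the-envelope sketch of the TIME_WAIT port-exhaustion stall.
 * The 60 s hold time and the 60000 conn/s offered load are assumptions. */
#include <stdio.h>

int main(void)
{
	int lo = 32768, hi = 61000;       /* ip_local_port_range defaults */
	int ports = hi - lo;              /* 28232 usable local ports     */
	double tw = 60.0;                 /* assumed TIME_WAIT seconds    */
	double offered = 60000.0;         /* assumed connection rate      */

	printf("local ports available    : %d\n", ports);
	printf("sustainable connections  : %.0f per second\n", ports / tw);
	printf("time to exhaust at %.0f/s: %.2f seconds\n",
	       offered, ports / offered);
	return 0;
}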
* Re: TCP 2MSL on loopback
  2007-03-06  9:22 ` TCP 2MSL on loopback Howard Chu
@ 2007-03-06 10:42   ` Eric Dumazet
  2007-03-06 18:39     ` Howard Chu
  2007-03-06 18:04   ` David Miller
  2007-03-06 18:46   ` Rick Jones
  2 siblings, 1 reply; 20+ messages in thread
From: Eric Dumazet @ 2007-03-06 10:42 UTC (permalink / raw)
  To: Howard Chu; +Cc: netdev

[-- Attachment #1: Type: text/plain, Size: 937 bytes --]

On Tuesday 06 March 2007 10:22, Howard Chu wrote:
>
> It's a combination of 2MSL and /proc/sys/net/ipv4/ip_local_port_range -
> on my system the default port range is 32768-61000. That means if I use
> up 28232 ports in less than 2MSL then everything stops. netstat will
> show that all the available port numbers are in TIME_WAIT state. And
> this is particularly bad because while waiting for the timeout, I can't
> initiate any new outbound connections of any kind at all - telnet, ssh,
> whatever, you have to wait for at least one port to free up.
> (Interesting denial of service there....)
>
> Granted, I was running my test on 2.6.18, perhaps 2.6.21 behaves
> differently.

Could you try this attached program and tell me what happens?

$ gcc -O2 -o socktest socktest.c -lpthread
$ time ./socktest -n 100000
nb_conn=99999 nb_accp=99999

real    0m5.058s
user    0m0.212s
sys     0m4.844s

(on my small machine, dell d610 :) )

[-- Attachment #2: socktest.c --]
[-- Type: text/plain, Size: 3408 bytes --]

/*
   Copyright (C) 2007 Eric Dumazet

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software Foundation,
   Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/poll.h>
#include <sys/sendfile.h>
#include <sys/epoll.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <time.h>
#include <ctype.h>
#include <netdb.h>

int fd __attribute__((aligned(64)));
int port = 9999;
unsigned long nb_acc __attribute__((aligned(64)));
unsigned long nb_conn1 __attribute__((aligned(64)));
unsigned long nb_conn2 __attribute__((aligned(64)));
unsigned long nb_conn3 __attribute__((aligned(64)));
int limit = 10000/3;

void *do_accept(void *arg)
{
	int s;
	struct sockaddr_in sa;
	socklen_t addrlen;
	int flags;
	char buffer[1024];

	while (1) {
		addrlen = sizeof(sa);
		s = accept(fd, (struct sockaddr *)&sa, &addrlen);
		if (s == -1)
			continue;
		flags = 0;
		recv(s, buffer, 1024, 0);
		send(s, "Answer\r\n", 8, 0);
		close(s);
		nb_acc++;
	}
}

void *do_conn(void *arg)
{
	int i;
	int on = 1;
	struct sockaddr_in sa;
	unsigned long *cpt = (unsigned long *)arg;

	for (i = 0; i < limit; i++) {
		int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
		int res;

		setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, 4);
		memset(&sa, 0, sizeof(sa));
		sa.sin_addr.s_addr = htonl(0x7f000001);
		sa.sin_port = htons(port);
		sa.sin_family = AF_INET;
		res = connect(s, (struct sockaddr *)&sa, sizeof(sa));
		if (res == 0) {
			char buffer[1024];

			send(s, "question\r\n", 10, 0);
			recv(s, buffer, sizeof(buffer), 0);
			(*cpt)++;
		} else {
			static int errcnt = 0;

			if (errcnt++ < 10)
				printf("connect error %d\n", errno);
		}
		close(s);
	}
}

int main(int argc, char *argv[])
{
	int on = 1;
	struct sockaddr_in sa;
	pthread_t tid, tid1, tid2, tid3;
	int i;
	void *res;

	while ((i = getopt(argc, argv, "Vn:")) != EOF) {
		if (i == 'n')
			limit = atoi(optarg) / 3;
	}
	fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, 4);
	memset(&sa, 0, sizeof(sa));
	sa.sin_port = htons(port);
	sa.sin_family = AF_INET;
	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1) {
		perror("bind");
		return 1;
	}
	listen(fd, 30000);
	pthread_create(&tid, NULL, do_accept, NULL);
	pthread_create(&tid1, NULL, do_conn, &nb_conn1);
	pthread_create(&tid2, NULL, do_conn, &nb_conn2);
	pthread_create(&tid3, NULL, do_conn, &nb_conn3);
	pthread_join(tid1, &res);
	pthread_join(tid2, &res);
	pthread_join(tid3, &res);
	printf("nb_conn=%lu nb_accp=%lu\n",
	       nb_conn1 + nb_conn2 + nb_conn3, nb_acc);
	return 0;
}

^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 10:42 ` Eric Dumazet @ 2007-03-06 18:39 ` Howard Chu 2007-03-06 20:07 ` Eric Dumazet 0 siblings, 1 reply; 20+ messages in thread From: Howard Chu @ 2007-03-06 18:39 UTC (permalink / raw) To: Eric Dumazet; +Cc: netdev Eric Dumazet wrote: > On Tuesday 06 March 2007 10:22, Howard Chu wrote: > >> It's a combination of 2MSL and /proc/sys/net/ipv4/ip_local_port_range - >> on my system the default port range is 32768-61000. That means if I use >> up 28232 ports in less than 2MSL then everything stops. netstat will >> show that all the available port numbers are in TIME_WAIT state. And >> this is particularly bad because while waiting for the timeout, I can't >> initiate any new outbound connections of any kind at all - telnet, ssh, >> whatever, you have to wait for at least one port to free up. >> (Interesting denial of service there....) >> >> Granted, I was running my test on 2.6.18, perhaps 2.6.21 behaves >> differently. > > Could you try this attached program and tell me whats happen ? > > $ gcc -O2 -o socktest socktest.c -lpthread > $ time ./socktest -n 100000 > nb_conn=99999 nb_accp=99999 > > real 0m5.058s > user 0m0.212s > sys 0m4.844s > > (on my small machine, dell d610 :) ) On my Asus laptop (2GHz Pentium M) the first time I ran it it completed in about 51 seconds, with no errors. I then copied it to another machine and started it up there, and got connect errors right away. I then went back to my laptop and ran it again, and got errors that time. This is the laptop run with errors: viola:~/src> uname -a Linux viola 2.6.18.2-34-default #1 SMP Mon Nov 27 11:46:27 UTC 2006 i686 i686 i386 GNU/Linux viola:~/src> time ./socktest -n 1000000 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 nb_conn=993757 nb_accp=993757 1.408u 88.649s 1:42.76 87.6% 0+0k 0+0io 0pf+0w This is my other system, an AMD X2 3800+ (dual core) mandolin:~/src> uname -a Linux mandolin 2.6.18.3SMP #9 SMP Sat Nov 25 10:08:51 PST 2006 x86_64 x86_64 x86_64 GNU/Linux mandolin:~/src> gcc -O2 -o socktest socktest.c -lpthread mandolin:~/src> time ./socktest -n 1000000 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 connect error 99 nb_conn=957088 nb_accp=957088 1.012u 630.991s 5:18.05 198.7% 0+0k 0+0io 0pf+0w -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
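The "connect error 99" lines above are the raw errno printed by socktest.c;
on Linux that value should be EADDRNOTAVAIL, i.e. no free local port could be
found for the new connection, which is consistent with the ephemeral range
being tied up in TIME_WAIT. A trivial check (errno numbering is
platform-specific, so treat the 99 as a Linux-ism and verify locally):

/* Quick sketch to confirm what errno 99 maps to on the local system. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("errno 99      : %s\n", strerror(99));
	printf("EADDRNOTAVAIL : %d\n", EADDRNOTAVAIL);  /* 99 on Linux */
	return 0;
}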
* Re: TCP 2MSL on loopback 2007-03-06 18:39 ` Howard Chu @ 2007-03-06 20:07 ` Eric Dumazet 2007-03-06 20:28 ` Howard Chu 0 siblings, 1 reply; 20+ messages in thread From: Eric Dumazet @ 2007-03-06 20:07 UTC (permalink / raw) To: Howard Chu; +Cc: netdev Howard Chu a écrit : > Eric Dumazet wrote: >> On Tuesday 06 March 2007 10:22, Howard Chu wrote: >> >>> It's a combination of 2MSL and /proc/sys/net/ipv4/ip_local_port_range - >>> on my system the default port range is 32768-61000. That means if I use >>> up 28232 ports in less than 2MSL then everything stops. netstat will >>> show that all the available port numbers are in TIME_WAIT state. And >>> this is particularly bad because while waiting for the timeout, I can't >>> initiate any new outbound connections of any kind at all - telnet, ssh, >>> whatever, you have to wait for at least one port to free up. >>> (Interesting denial of service there....) >>> >>> Granted, I was running my test on 2.6.18, perhaps 2.6.21 behaves >>> differently. >> >> Could you try this attached program and tell me whats happen ? >> >> $ gcc -O2 -o socktest socktest.c -lpthread >> $ time ./socktest -n 100000 >> nb_conn=99999 nb_accp=99999 >> >> real 0m5.058s >> user 0m0.212s >> sys 0m4.844s >> >> (on my small machine, dell d610 :) ) > > On my Asus laptop (2GHz Pentium M) the first time I ran it it completed > in about 51 seconds, with no errors. I then copied it to another machine > and started it up there, and got connect errors right away. I then went > back to my laptop and ran it again, and got errors that time. > > This is the laptop run with errors: > viola:~/src> uname -a > Linux viola 2.6.18.2-34-default #1 SMP Mon Nov 27 11:46:27 UTC 2006 i686 > i686 i386 GNU/Linux > viola:~/src> time ./socktest -n 1000000 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > nb_conn=993757 nb_accp=993757 > 1.408u 88.649s 1:42.76 87.6% 0+0k 0+0io 0pf+0w > > This is my other system, an AMD X2 3800+ (dual core) > mandolin:~/src> uname -a > Linux mandolin 2.6.18.3SMP #9 SMP Sat Nov 25 10:08:51 PST 2006 x86_64 > x86_64 x86_64 GNU/Linux > mandolin:~/src> gcc -O2 -o socktest socktest.c -lpthread > mandolin:~/src> time ./socktest -n 1000000 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > connect error 99 > nb_conn=957088 nb_accp=957088 > 1.012u 630.991s 5:18.05 198.7% 0+0k 0+0io 0pf+0w Let me see, any chance you can try the prog on 2.6.20 ? If not, please send : grep . /proc/sys/net/ipv4/* Thank you ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 20:07 ` Eric Dumazet @ 2007-03-06 20:28 ` Howard Chu 2007-03-06 20:39 ` Eric Dumazet 0 siblings, 1 reply; 20+ messages in thread From: Howard Chu @ 2007-03-06 20:28 UTC (permalink / raw) To: Eric Dumazet; +Cc: netdev Eric Dumazet wrote: > Let me see, any chance you can try the prog on 2.6.20 ? Not any time soon. > > If not, please send : > > grep . /proc/sys/net/ipv4/* This is the output on the laptop: /proc/sys/net/ipv4/icmp_echo_ignore_all:0 /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts:1 /proc/sys/net/ipv4/icmp_errors_use_inbound_ifaddr:0 /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses:1 /proc/sys/net/ipv4/icmp_ratelimit:250 /proc/sys/net/ipv4/icmp_ratemask:6168 /proc/sys/net/ipv4/igmp_max_memberships:20 /proc/sys/net/ipv4/igmp_max_msf:10 /proc/sys/net/ipv4/inet_peer_gc_maxtime:120 /proc/sys/net/ipv4/inet_peer_gc_mintime:10 /proc/sys/net/ipv4/inet_peer_maxttl:600 /proc/sys/net/ipv4/inet_peer_minttl:120 /proc/sys/net/ipv4/inet_peer_threshold:65664 /proc/sys/net/ipv4/ip_default_ttl:64 /proc/sys/net/ipv4/ip_dynaddr:0 /proc/sys/net/ipv4/ip_forward:0 /proc/sys/net/ipv4/ipfrag_high_thresh:262144 /proc/sys/net/ipv4/ipfrag_low_thresh:196608 /proc/sys/net/ipv4/ipfrag_max_dist:64 /proc/sys/net/ipv4/ipfrag_secret_interval:600 /proc/sys/net/ipv4/ipfrag_time:30 /proc/sys/net/ipv4/ip_local_port_range:32768 61000 /proc/sys/net/ipv4/ip_nonlocal_bind:0 /proc/sys/net/ipv4/ip_no_pmtu_disc:0 /proc/sys/net/ipv4/tcp_abc:0 /proc/sys/net/ipv4/tcp_abort_on_overflow:0 /proc/sys/net/ipv4/tcp_adv_win_scale:2 /proc/sys/net/ipv4/tcp_app_win:31 /proc/sys/net/ipv4/tcp_base_mss:512 /proc/sys/net/ipv4/tcp_congestion_control:reno /proc/sys/net/ipv4/tcp_dma_copybreak:4096 /proc/sys/net/ipv4/tcp_dsack:1 /proc/sys/net/ipv4/tcp_ecn:0 /proc/sys/net/ipv4/tcp_fack:1 /proc/sys/net/ipv4/tcp_fin_timeout:60 /proc/sys/net/ipv4/tcp_frto:0 /proc/sys/net/ipv4/tcp_keepalive_intvl:75 /proc/sys/net/ipv4/tcp_keepalive_probes:9 /proc/sys/net/ipv4/tcp_keepalive_time:7200 /proc/sys/net/ipv4/tcp_low_latency:0 /proc/sys/net/ipv4/tcp_max_orphans:32768 /proc/sys/net/ipv4/tcp_max_syn_backlog:1024 /proc/sys/net/ipv4/tcp_max_tw_buckets:180000 /proc/sys/net/ipv4/tcp_mem:98304 131072 196608 /proc/sys/net/ipv4/tcp_moderate_rcvbuf:1 /proc/sys/net/ipv4/tcp_mtu_probing:0 /proc/sys/net/ipv4/tcp_no_metrics_save:0 /proc/sys/net/ipv4/tcp_orphan_retries:0 /proc/sys/net/ipv4/tcp_reordering:3 /proc/sys/net/ipv4/tcp_retrans_collapse:1 /proc/sys/net/ipv4/tcp_retries1:3 /proc/sys/net/ipv4/tcp_retries2:15 /proc/sys/net/ipv4/tcp_rfc1337:0 /proc/sys/net/ipv4/tcp_rmem:4096 87380 4194304 /proc/sys/net/ipv4/tcp_sack:1 /proc/sys/net/ipv4/tcp_slow_start_after_idle:1 /proc/sys/net/ipv4/tcp_stdurg:0 /proc/sys/net/ipv4/tcp_synack_retries:5 /proc/sys/net/ipv4/tcp_syncookies:1 /proc/sys/net/ipv4/tcp_syn_retries:5 /proc/sys/net/ipv4/tcp_timestamps:1 /proc/sys/net/ipv4/tcp_tso_win_divisor:3 /proc/sys/net/ipv4/tcp_tw_recycle:0 /proc/sys/net/ipv4/tcp_tw_reuse:0 /proc/sys/net/ipv4/tcp_window_scaling:1 /proc/sys/net/ipv4/tcp_wmem:4096 16384 4194304 /proc/sys/net/ipv4/tcp_workaround_signed_windows:0 -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 20:28 ` Howard Chu @ 2007-03-06 20:39 ` Eric Dumazet 2007-03-06 21:05 ` Howard Chu 0 siblings, 1 reply; 20+ messages in thread From: Eric Dumazet @ 2007-03-06 20:39 UTC (permalink / raw) To: Howard Chu; +Cc: netdev Howard Chu a écrit : > Eric Dumazet wrote: >> Let me see, any chance you can try the prog on 2.6.20 ? > > Not any time soon. >> >> If not, please send : >> >> grep . /proc/sys/net/ipv4/* > > This is the output on the laptop: > /proc/sys/net/ipv4/icmp_echo_ignore_all:0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts:1 > /proc/sys/net/ipv4/icmp_errors_use_inbound_ifaddr:0 > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses:1 > /proc/sys/net/ipv4/icmp_ratelimit:250 > /proc/sys/net/ipv4/icmp_ratemask:6168 > /proc/sys/net/ipv4/igmp_max_memberships:20 > /proc/sys/net/ipv4/igmp_max_msf:10 > /proc/sys/net/ipv4/inet_peer_gc_maxtime:120 > /proc/sys/net/ipv4/inet_peer_gc_mintime:10 > /proc/sys/net/ipv4/inet_peer_maxttl:600 > /proc/sys/net/ipv4/inet_peer_minttl:120 > /proc/sys/net/ipv4/inet_peer_threshold:65664 > /proc/sys/net/ipv4/ip_default_ttl:64 > /proc/sys/net/ipv4/ip_dynaddr:0 > /proc/sys/net/ipv4/ip_forward:0 > /proc/sys/net/ipv4/ipfrag_high_thresh:262144 > /proc/sys/net/ipv4/ipfrag_low_thresh:196608 > /proc/sys/net/ipv4/ipfrag_max_dist:64 > /proc/sys/net/ipv4/ipfrag_secret_interval:600 > /proc/sys/net/ipv4/ipfrag_time:30 > /proc/sys/net/ipv4/ip_local_port_range:32768 61000 > /proc/sys/net/ipv4/ip_nonlocal_bind:0 > /proc/sys/net/ipv4/ip_no_pmtu_disc:0 > /proc/sys/net/ipv4/tcp_abc:0 > /proc/sys/net/ipv4/tcp_abort_on_overflow:0 > /proc/sys/net/ipv4/tcp_adv_win_scale:2 > /proc/sys/net/ipv4/tcp_app_win:31 > /proc/sys/net/ipv4/tcp_base_mss:512 > /proc/sys/net/ipv4/tcp_congestion_control:reno > /proc/sys/net/ipv4/tcp_dma_copybreak:4096 > /proc/sys/net/ipv4/tcp_dsack:1 > /proc/sys/net/ipv4/tcp_ecn:0 > /proc/sys/net/ipv4/tcp_fack:1 > /proc/sys/net/ipv4/tcp_fin_timeout:60 > /proc/sys/net/ipv4/tcp_frto:0 > /proc/sys/net/ipv4/tcp_keepalive_intvl:75 > /proc/sys/net/ipv4/tcp_keepalive_probes:9 > /proc/sys/net/ipv4/tcp_keepalive_time:7200 > /proc/sys/net/ipv4/tcp_low_latency:0 > /proc/sys/net/ipv4/tcp_max_orphans:32768 > /proc/sys/net/ipv4/tcp_max_syn_backlog:1024 > /proc/sys/net/ipv4/tcp_max_tw_buckets:180000 > /proc/sys/net/ipv4/tcp_mem:98304 131072 196608 > /proc/sys/net/ipv4/tcp_moderate_rcvbuf:1 > /proc/sys/net/ipv4/tcp_mtu_probing:0 > /proc/sys/net/ipv4/tcp_no_metrics_save:0 > /proc/sys/net/ipv4/tcp_orphan_retries:0 > /proc/sys/net/ipv4/tcp_reordering:3 > /proc/sys/net/ipv4/tcp_retrans_collapse:1 > /proc/sys/net/ipv4/tcp_retries1:3 > /proc/sys/net/ipv4/tcp_retries2:15 > /proc/sys/net/ipv4/tcp_rfc1337:0 > /proc/sys/net/ipv4/tcp_rmem:4096 87380 4194304 > /proc/sys/net/ipv4/tcp_sack:1 > /proc/sys/net/ipv4/tcp_slow_start_after_idle:1 > /proc/sys/net/ipv4/tcp_stdurg:0 > /proc/sys/net/ipv4/tcp_synack_retries:5 > /proc/sys/net/ipv4/tcp_syncookies:1 > /proc/sys/net/ipv4/tcp_syn_retries:5 > /proc/sys/net/ipv4/tcp_timestamps:1 > /proc/sys/net/ipv4/tcp_tso_win_divisor:3 > /proc/sys/net/ipv4/tcp_tw_recycle:0 > /proc/sys/net/ipv4/tcp_tw_reuse:0 > /proc/sys/net/ipv4/tcp_window_scaling:1 > /proc/sys/net/ipv4/tcp_wmem:4096 16384 4194304 > /proc/sys/net/ipv4/tcp_workaround_signed_windows:0 Arf... dont tell me you forgot to do this... echo 1 >/proc/sys/net/ipv4/tcp_tw_recycle echo 1 >/proc/sys/net/ipv4/tcp_tw_reuse ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 20:39 ` Eric Dumazet @ 2007-03-06 21:05 ` Howard Chu 2007-03-06 21:25 ` Rick Jones 0 siblings, 1 reply; 20+ messages in thread From: Howard Chu @ 2007-03-06 21:05 UTC (permalink / raw) To: Eric Dumazet; +Cc: netdev Eric Dumazet wrote: > Arf... dont tell me you forgot to do this... > > echo 1 >/proc/sys/net/ipv4/tcp_tw_recycle > echo 1 >/proc/sys/net/ipv4/tcp_tw_reuse That does not appear to me to be a safe thing to do on a production machine. Tweaks that are only good in a test environment really don't help the testing effort; they just mask a problem that will surface later at deployment time. We could run our benchmarks this way and get high rates but no one deploying the server for real use would ever get anything like that, which makes the benchmark figure rather pointless. On the other hand, being able to configure a small MSL for the loopback device is perfectly safe. Being able to configure a small MSL for other interfaces may be safe, depending on the rest of the network layout. -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback
  2007-03-06 21:05 ` Howard Chu
@ 2007-03-06 21:25   ` Rick Jones
  2007-03-06 21:35     ` David Miller
  0 siblings, 1 reply; 20+ messages in thread
From: Rick Jones @ 2007-03-06 21:25 UTC (permalink / raw)
  To: Howard Chu; +Cc: Eric Dumazet, netdev

> On the other hand, being able to configure a small MSL for the loopback
> device is perfectly safe. Being able to configure a small MSL for other
> interfaces may be safe, depending on the rest of the network layout.

A peanut gallery question - I seem to recall prior discussions about how
one cannot assume that a packet destined for a given IP address will
remain destined for that given IP address, as it could go through a module
that will rewrite headers etc. Is traffic destined for 127.0.0.1 immune
from that?

rick jones

^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 21:25 ` Rick Jones @ 2007-03-06 21:35 ` David Miller 2007-03-06 22:07 ` Howard Chu 0 siblings, 1 reply; 20+ messages in thread From: David Miller @ 2007-03-06 21:35 UTC (permalink / raw) To: rick.jones2; +Cc: hyc, dada1, netdev From: Rick Jones <rick.jones2@hp.com> Date: Tue, 06 Mar 2007 13:25:35 -0800 > > On the other hand, being able to configure a small MSL for the loopback > > device is perfectly safe. Being able to configure a small MSL for other > > interfaces may be safe, depending on the rest of the network layout. > > A peanut gallery question - I seem to recall prior discussions about how > one cannot assume that a packet destined for a given IP address will > remain detined for that given IP address as it could go through a module > that will rewrite headers etc. That's right, both netfilter and the packet scheduler actions can do that, that's why this whole idea about changing the MSL on loopback by default is wrong and pointless. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 21:35 ` David Miller @ 2007-03-06 22:07 ` Howard Chu 2007-03-06 22:54 ` Stephen Hemminger 0 siblings, 1 reply; 20+ messages in thread From: Howard Chu @ 2007-03-06 22:07 UTC (permalink / raw) To: David Miller; +Cc: rick.jones2, dada1, netdev David Miller wrote: > From: Rick Jones <rick.jones2@hp.com> > Date: Tue, 06 Mar 2007 13:25:35 -0800 > >>> On the other hand, being able to configure a small MSL for the loopback >>> device is perfectly safe. Being able to configure a small MSL for other >>> interfaces may be safe, depending on the rest of the network layout. >> A peanut gallery question - I seem to recall prior discussions about how >> one cannot assume that a packet destined for a given IP address will >> remain detined for that given IP address as it could go through a module >> that will rewrite headers etc. > > That's right, both netfilter and the packet scheduler actions > can do that, that's why this whole idea about changing the MSL > on loopback by default is wrong and pointless. If the headers get rewritten and the packet gets directed elsewhere, then we're no longer talking about a loopback connection, so that's outside the discussion. If the packet gets munged by multiple filters but still eventually gets to the specified destination, OK. But regardless, if both endpoints of the connection are on the loopback device, then there is nothing wrong with the idea. Those filters can only do so much, they still have to preserve the reliable in-order delivery semantics of TCP, otherwise the system is broken. It may not have much use, sure, I admitted that much from the outset. So I'll leave it at this, thanks for the feedback. -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 22:07 ` Howard Chu @ 2007-03-06 22:54 ` Stephen Hemminger 2007-03-06 23:22 ` Howard Chu 0 siblings, 1 reply; 20+ messages in thread From: Stephen Hemminger @ 2007-03-06 22:54 UTC (permalink / raw) To: Howard Chu; +Cc: David Miller, rick.jones2, dada1, netdev On Tue, 06 Mar 2007 14:07:09 -0800 Howard Chu <hyc@symas.com> wrote: > David Miller wrote: > > From: Rick Jones <rick.jones2@hp.com> > > Date: Tue, 06 Mar 2007 13:25:35 -0800 > > > >>> On the other hand, being able to configure a small MSL for the loopback > >>> device is perfectly safe. Being able to configure a small MSL for other > >>> interfaces may be safe, depending on the rest of the network layout. > >> A peanut gallery question - I seem to recall prior discussions about how > >> one cannot assume that a packet destined for a given IP address will > >> remain detined for that given IP address as it could go through a module > >> that will rewrite headers etc. > > > > That's right, both netfilter and the packet scheduler actions > > can do that, that's why this whole idea about changing the MSL > > on loopback by default is wrong and pointless. > > If the headers get rewritten and the packet gets directed elsewhere, > then we're no longer talking about a loopback connection, so that's > outside the discussion. > > If the packet gets munged by multiple filters but still eventually gets > to the specified destination, OK. But regardless, if both endpoints of > the connection are on the loopback device, then there is nothing wrong > with the idea. Those filters can only do so much, they still have to > preserve the reliable in-order delivery semantics of TCP, otherwise the > system is broken. > > It may not have much use, sure, I admitted that much from the outset. > > So I'll leave it at this, thanks for the feedback. TCP can not assume anything about the path that a packet may take. We have declared a moratorium on loopback benchmark foolishness. Go optimize the idle loop instead ;-) -- Stephen Hemminger <shemminger@linux-foundation.org> ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 22:54 ` Stephen Hemminger @ 2007-03-06 23:22 ` Howard Chu 0 siblings, 0 replies; 20+ messages in thread From: Howard Chu @ 2007-03-06 23:22 UTC (permalink / raw) To: Stephen Hemminger; +Cc: David Miller, rick.jones2, dada1, netdev Stephen Hemminger wrote: > TCP can not assume anything about the path that a packet may take. > We have declared a moratorium on loopback benchmark foolishness. > Go optimize the idle loop instead ;-) Sure - A delay loop with fewer instructions is a worthwhile optimization because it has less impact on a CPU's instruction cache... -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 9:22 ` TCP 2MSL on loopback Howard Chu 2007-03-06 10:42 ` Eric Dumazet @ 2007-03-06 18:04 ` David Miller 2007-03-06 18:46 ` Rick Jones 2 siblings, 0 replies; 20+ messages in thread From: David Miller @ 2007-03-06 18:04 UTC (permalink / raw) To: hyc; +Cc: dada1, netdev From: Howard Chu <hyc@symas.com> Date: Tue, 06 Mar 2007 01:22:18 -0800 > OK, I just subscribed to netdev... Unlike other mailing lists you don't have to subscribe to netdev in order to post to it and ask questions :-) ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback
  2007-03-06  9:22 ` TCP 2MSL on loopback Howard Chu
  2007-03-06 10:42   ` Eric Dumazet
  2007-03-06 18:04   ` David Miller
@ 2007-03-06 18:46   ` Rick Jones
  2007-03-06 19:25     ` Howard Chu
  2 siblings, 1 reply; 20+ messages in thread
From: Rick Jones @ 2007-03-06 18:46 UTC (permalink / raw)
  To: Howard Chu; +Cc: Eric Dumazet, netdev

> This is probably not something that happens in real-world deployments.
> But it's not 60,000 concurrent connections, it's 60,000 within a
> 2-minute span.

Sounds like a case of Doctor! Doctor! It hurts when I do this.

> I'm not saying this is a high-priority problem, I only encountered it in
> a test scenario where I was deliberately trying to max out the server.
>
>>> Ideally the 2MSL parameter would be dynamically adjusted based on the
>>> route to the destination and the weights associated with those routes.
>>> In the simplest case, connections between machines on the same subnet
>>> (i.e., no router hops involved) should have a much smaller default value
>>> than connections that traverse any routers. I'd settle for a two-level
>>> setting - with no router hops, use the small value; with any router hops
>>> use the large value.

With transparent bridging, nobody knows how long the datagram may be out
there. Admittedly, the chances of a datagram living for a full two
minutes these days is probably nil, but just being in the same IP subnet
doesn't really mean anything when it comes to physical locality.

> It's a combination of 2MSL and /proc/sys/net/ipv4/ip_local_port_range -
> on my system the default port range is 32768-61000. That means if I use
> up 28232 ports in less than 2MSL then everything stops. netstat will
> show that all the available port numbers are in TIME_WAIT state. And
> this is particularly bad because while waiting for the timeout, I can't
> initiate any new outbound connections of any kind at all - telnet, ssh,
> whatever, you have to wait for at least one port to free up.
> (Interesting denial of service there....)

SPECweb benchmarking has had to deal with the issue of attempted
TIME_WAIT reuse going back to 1997. It deals with it by not relying on
the client's configured local/anonymous/ephemeral port number range and
instead making explicit bind() calls in the (more or less) entire unpriv
port range (actually it may just be from 5000 to 65535, but still).

Now, if it weren't necessary to fully randomize the ISNs, the chances of
a successful transition from TIME_WAIT to ESTABLISHED might be greater,
but going back to the good old days of more or less purely clock-driven
ISNs isn't likely.

rick jones

^ permalink raw reply	[flat|nested] 20+ messages in thread
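For concreteness, "making explicit bind() calls" means the client chooses its
own source port instead of letting the kernel pick one from
ip_local_port_range. The sketch below is only my reading of that idea, not
SPECweb source; the 5000-65535 range and the 127.0.0.1:9999 target are
placeholders, and real code would also need SO_REUSEADDR plus error handling
for ports still sitting in TIME_WAIT.

/* Hypothetical client that picks its own local port via bind() before
 * connect().  A rough reading of the approach described above, not actual
 * SPECweb code; the port range and destination are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_from(uint16_t lport, const char *dip, uint16_t dport)
{
	struct sockaddr_in local, remote;
	int s = socket(AF_INET, SOCK_STREAM, 0);

	memset(&local, 0, sizeof(local));
	local.sin_family = AF_INET;
	local.sin_addr.s_addr = htonl(INADDR_ANY);
	local.sin_port = htons(lport);          /* explicit source port */

	memset(&remote, 0, sizeof(remote));
	remote.sin_family = AF_INET;
	remote.sin_port = htons(dport);
	inet_pton(AF_INET, dip, &remote.sin_addr);

	if (s == -1 ||
	    bind(s, (struct sockaddr *)&local, sizeof(local)) == -1 ||
	    connect(s, (struct sockaddr *)&remote, sizeof(remote)) == -1) {
		if (s != -1)
			close(s);
		return -1;
	}
	return s;
}

int main(void)
{
	unsigned int p;

	for (p = 5000; p <= 65535; p++) {       /* walk the unpriv range */
		int s = connect_from((uint16_t)p, "127.0.0.1", 9999);
		if (s != -1)
			close(s);
	}
	return 0;
}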
* Re: TCP 2MSL on loopback 2007-03-06 18:46 ` Rick Jones @ 2007-03-06 19:25 ` Howard Chu 2007-03-06 20:41 ` Rick Jones 0 siblings, 1 reply; 20+ messages in thread From: Howard Chu @ 2007-03-06 19:25 UTC (permalink / raw) To: Rick Jones; +Cc: Eric Dumazet, netdev Rick Jones wrote: >> This is probably not something that happens in real world deployments. >> I But it's not 60,000 concurrent connections, it's 60,000 within a 2 >> minute span. > > Sounds like a case of Doctor! Doctor! It hurts when I do this. I guess. In the cases where it matters, we use LDAP over Unix Domain Sockets instead of TCP. Smarter clients that do connection pooling would help too, but the fact that this even came to our attention is because not all clients out there are smart enough. Since we have an alternative that works, I'm not really worried about it. I just thought it was worthwhile to raise the question. >> I'm not saying this is a high priority problem, I only encountered it >> in a test scenario where I was deliberately trying to max out the server. >> >>>> Ideally the 2MSL parameter would be dynamically adjusted based on the >>>> route to the destination and the weights associated with those routes. >>>> In the simplest case, connections between machines on the same subnet >>>> (i.e., no router hops involved) should have a much smaller default >>>> value >>>> than connections that traverse any routers. I'd settle for a two-level >>>> setting - with no router hops, use the small value; with any router >>>> hops >>>> use the large value. > > With transparant bridging, nobody knows how long the datagram may be out > there. Admittedly, the chances of a datagram living for a full two > minutes these days is probably nil, but just being in the same IP subnet > doesn't really mean anything when it comes to physical locality. Bridging isn't necessarily a problem though. The 2MSL timeout is designed to prevent problems from delayed packets that got sent through multiple paths. In a bridging setup you don't allow multiple paths, that's what STP is designed to prevent. If you want to configure a network that allows multiple paths, you need to use a router, not a bridge. > SPECweb benchmarking has had to deal with the issue of attempted > TIME_WAIT reuse going back to 1997. It deals with it by not relying on > the client's configured local/anonymous/ephemeral port number range and > instead making explicit bind() calls in the (more or less) entire unpriv > port range (actually it may just be from 5000 to 65535 but still) That still doesn't solve the problem, it only ~doubles the available port range. That means it takes 0.6 seconds to trigger the problem instead of only 0.3 seconds... > Now, if it weren't necessary to fully randomize the ISNs, the chances of > a successful transition from TIME_WAIT to ESTABLISHED might be greater, > but going back to the good old days of more or less purly clock driven > ISN's isn't likely. In an environment where connections are opened and closed very quickly with only a small amount of data carried per connection, it might make sense to remember the last sequence number used on a port and use that as the floor of the next randomly generated ISN. Monotonically increasing sequence numbers aren't a security risk if there's still a randomly determined gap from one connection to the next. But I don't think it's necessary to consider this at the moment. -- -- Howard Chu Chief Architect, Symas Corp. 
http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback
  2007-03-06 19:25 ` Howard Chu
@ 2007-03-06 20:41   ` Rick Jones
  2007-03-07  3:36     ` Howard Chu
  0 siblings, 1 reply; 20+ messages in thread
From: Rick Jones @ 2007-03-06 20:41 UTC (permalink / raw)
  To: Howard Chu; +Cc: Eric Dumazet, netdev

>> With transparent bridging, nobody knows how long the datagram may be
>> out there. Admittedly, the chances of a datagram living for a full
>> two minutes these days is probably nil, but just being in the same IP
>> subnet doesn't really mean anything when it comes to physical locality.
>
> Bridging isn't necessarily a problem though. The 2MSL timeout is
> designed to prevent problems from delayed packets that got sent through
> multiple paths. In a bridging setup you don't allow multiple paths,
> that's what STP is designed to prevent. If you want to configure a
> network that allows multiple paths, you need to use a router, not a bridge.

Well, there is trunking at the data link layer, and in theory there could
be an active-standby setup where the standby took a somewhat different
path. The timeout is also there to cover datagrams which just got "stuck"
somewhere (IIRC) and may not necessarily require a multiple-path situation.

>> SPECweb benchmarking has had to deal with the issue of attempted
>> TIME_WAIT reuse going back to 1997. It deals with it by not relying
>> on the client's configured local/anonymous/ephemeral port number range
>> and instead making explicit bind() calls in the (more or less) entire
>> unpriv port range (actually it may just be from 5000 to 65535, but still).
>
> That still doesn't solve the problem, it only ~doubles the available
> port range. That means it takes 0.6 seconds to trigger the problem
> instead of only 0.3 seconds...

True. Thankfully, the web learned to use persistent connections, so later
versions of SPECweb benchmarking make use of persistent connections.

> In an environment where connections are opened and closed very quickly
> with only a small amount of data carried per connection, it might make
> sense to remember the last sequence number used on a port and use that
> as the floor of the next randomly generated ISN. Monotonically
> increasing sequence numbers aren't a security risk if there's still a
> randomly determined gap from one connection to the next. But I don't
> think it's necessary to consider this at the moment.

I thought that all the "security types" started squawking if the ISN
wasn't completely random?

I've not tried this, but if a client does want to cycle through thousands
of connections per second, and if it is the one to initiate connection
close, would it be sufficient to only use something like:

   socket()
   bind()
loop:
   connect()
   request()
   response()
   shutdown(SHUT_RDWR)
   goto loop

i.e. not call close on the FD, so there is still a direct link to the
connection in TIME_WAIT, so one could in theory initiate a new connection
from TIME_WAIT? Then in theory the randomness could be _almost_ the entire
sequence space, less the previous connection's window (IIRC).

rick jones

^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: TCP 2MSL on loopback 2007-03-06 20:41 ` Rick Jones @ 2007-03-07 3:36 ` Howard Chu 0 siblings, 0 replies; 20+ messages in thread From: Howard Chu @ 2007-03-07 3:36 UTC (permalink / raw) To: Rick Jones; +Cc: Eric Dumazet, netdev Rick Jones wrote: > The timeout is also to cover datagrams which just got "stuck" somewhere > too (IIRC) and may not necessarily require a multiple path situation. I guess that's a fair point. Originally, the only possible place for a packet to get "stuck" was in a router but I suppose that may no longer be true. > True. Thankfully, the web learned to use persistent connections so > later versions of SPECweb benchmarking make use of persistent connections. As a complete aside, I think it's about time for a SPECldap benchmark... -- -- Howard Chu Chief Architect, Symas Corp. http://www.symas.com Director, Highland Sun http://highlandsun.com/hyc Chief Architect, OpenLDAP http://www.openldap.org/project/ ^ permalink raw reply [flat|nested] 20+ messages in thread
Thread overview: 20+ messages
[not found] <45EBFD13.1060106@symas.com>
2007-03-05 14:28 ` TCP 2MSL on loopback Eric Dumazet
2007-03-05 15:09 ` [PATCH] twcal_jiffie should be unsigned long, not int Eric Dumazet
2007-03-05 21:33 ` David Miller
2007-03-06 9:22 ` TCP 2MSL on loopback Howard Chu
2007-03-06 10:42 ` Eric Dumazet
2007-03-06 18:39 ` Howard Chu
2007-03-06 20:07 ` Eric Dumazet
2007-03-06 20:28 ` Howard Chu
2007-03-06 20:39 ` Eric Dumazet
2007-03-06 21:05 ` Howard Chu
2007-03-06 21:25 ` Rick Jones
2007-03-06 21:35 ` David Miller
2007-03-06 22:07 ` Howard Chu
2007-03-06 22:54 ` Stephen Hemminger
2007-03-06 23:22 ` Howard Chu
2007-03-06 18:04 ` David Miller
2007-03-06 18:46 ` Rick Jones
2007-03-06 19:25 ` Howard Chu
2007-03-06 20:41 ` Rick Jones
2007-03-07 3:36 ` Howard Chu