* V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-03 20:55 UTC (permalink / raw)
To: netdev
I have a dual-processor box running SuSE 9.1 Enterprise that I changed over to the
V2.6.10 kernel. The box has two interfaces in it, both e1000s. The box
receives anywhere from 200 Mbit to 500+ Mbit that it needs to route out to
other boxes. The policy routing table is running ~150-200 rules, i.e. data
comes in eth3 (e1000) and is policy routed to a destination out eth2 (e1000).
Under V2.4 kernels, the system operates just fine and drops few packets, if
any; right now under V2.4 I have dropped a total of three packets. Under
2.6, I can watch the RX drop counter increment. See below.
[h-pr-msn-1 guthrie 1:48pm]~-> ifconfig eth3 ; sleep 10 ; ifconfig eth3
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:132919934 errors:311285 dropped:311285 overruns:247225 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2630721320 (2508.8 Mb) TX bytes:484 (484.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:133847068 errors:325697 dropped:325697 overruns:258546 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3102796062 (2959.0 Mb) TX bytes:484 (484.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
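For reference, the deltas between the two snapshots, taken 10 seconds apart, work out as follows (plain shell arithmetic on the counters above):

```shell
# Deltas between the two ifconfig snapshots, 10 seconds apart:
echo $(( (133847068 - 132919934) / 10 ))   # RX packets/s  -> 92713
echo $(( (325697 - 311285) / 10 ))         # RX drops/s    -> 1441
```

So the box is receiving roughly 92.7 kpps on eth3 and dropping roughly 1.4 kpps of it.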
If I turn off the policy routing, I instantly stop getting RX errors or
overruns, as it appears the CPU can now pay attention to the packets coming in
and drop them (I turned off IP forwarding as well).
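For readers following along: a rule set of the kind described is typically managed with the iproute2 `ip rule` tool. The prefixes, table numbers, and next-hop below are hypothetical illustrations, not values taken from this box:

```shell
# Hypothetical policy-routing rule of the kind described above:
# classify traffic by source prefix into a dedicated routing table.
ip rule add from 10.1.0.0/16 table 100 pref 100
ip route add default via 10.253.0.2 dev eth2 table 100
ip rule list          # show the active rule set
ip rule del pref 100  # "turn off" one policy rule
```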
V2.4 Kernel mpstat data:
command: mpstat -P ALL 60
Linux 2.4.21-251-smp (h-pr-msn-1) 12/15/2004
01:16:24 PM CPU %user %nice %system %idle intr/s
01:17:19 PM all 0.16 0.00 50.12 49.72 42114.18
01:17:19 PM 0 0.12 0.00 55.60 44.28 42114.18
01:17:19 PM 1 0.20 0.00 44.65 55.15 42114.18
01:17:19 PM CPU %user %nice %system %idle intr/s
01:18:19 PM all 0.13 0.00 48.49 51.38 42103.08
01:18:19 PM 0 0.13 0.00 31.88 67.98 42103.08
01:18:19 PM 1 0.13 0.00 65.10 34.77 42103.08
V2.6 kernel mpstat data:
command: mpstat -P ALL 60
Linux 2.6.5-7.111.5-smp (h-pr-msn-1) 12/15/04
13:36:25     CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
13:37:25     all    0.13    0.00    0.15    0.09    2.03   43.14   54.45  25506.53
13:37:25       0    0.17    0.00    0.08    0.18    0.00   16.81   82.76   2215.63
13:37:25       1    0.08    0.00    0.20    0.00    4.08   69.49   26.14  23291.34
13:37:25     CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
13:38:24     all    0.14    0.00    0.12    0.12    2.02   42.89   54.71  25900.70
13:38:24       0    0.03    0.00    0.05    0.22    0.00   16.67   83.03   2246.10
13:38:24       1    0.25    0.00    0.20    0.03    4.02   69.12   26.40  23654.55
Any insights as to why there would be such a stark difference in performance
between V2.6 and V2.4?
Please advise.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
From: Stephen Hemminger @ 2005-01-03 22:51 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev
On Mon, 3 Jan 2005 14:55:24 -0600
"Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> I have a dual processor box running Suse 9.1 Ent. that I changed over to the
> V2.6.10 kernel. The box has two interfaces in it, both E1000s. The box
> receives anywhere from 200mbit to 500+ mbit that it needs to route out to
> other boxes. The policy routing table is running ~ 150-200 rules. ie. data
> comes in E3(e1000), is policy routed to a destination sent out E2(e1000).
>
> Under V2.4 kernels, the system will operate just fine and drop few packets if
> any. ie. right now under V2.4, I have dropped all of three packets. Under
> 2.6, I can watch the RX drop counter increment. See below.
>
> [... ifconfig and mpstat output snipped; quoted in full above ...]
>
> Any insights as to why there would be such a stark difference in performance
> between V2.6 and V2.4?
How many flows are going through the router? The neighbour cache
can become a bottleneck. Perhaps Robert "the Router Man" Olsson can
give some hints.
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-03 22:56 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger
How would I check? It should be in the hundreds of thousands.
On Monday 03 January 2005 04:51 pm, Stephen Hemminger wrote:
> On Mon, 3 Jan 2005 14:55:24 -0600
>
> "Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> > Any insights as to why there would be such a stark difference in
> > performance between V2.6 and V2.4?
>
> How many flows are going through the router? The neighbour cache
> can become a bottleneck. Perhaps Robert "the Router Man" Olsson can
> give some hints.
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-04 15:07 UTC (permalink / raw)
To: netdev
On Monday 03 January 2005 04:51 pm, Stephen Hemminger wrote:
> How many flows are going through the router? The neighbour cache
> can become a bottleneck. Perhaps Robert "the Router Man" Olsson can
> give some hints.
Tue Jan 4 08:59:12 CST 2005
58406 rt_cache
Tue Jan 4 08:59:48 CST 2005
60636 rt_cache
Tue Jan 4 09:00:34 CST 2005
63891 rt_cache
Tue Jan 4 09:01:02 CST 2005
64635 rt_cache
Tue Jan 4 09:01:29 CST 2005
65689 rt_cache
Tue Jan 4 09:01:58 CST 2005
63426 rt_cache
Tue Jan 4 09:02:25 CST 2005
64139 rt_cache
Tue Jan 4 09:02:53 CST 2005
63860 rt_cache
Tue Jan 4 09:03:20 CST 2005
65719 rt_cache
Tue Jan 4 09:03:48 CST 2005
62465 rt_cache
Tue Jan 4 09:04:15 CST 2005
39339 rt_cache
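The sampling command was not shown in the message; a loop like this (an assumption on my part) would produce output in the shape above:

```shell
# Sample the route cache size every ~30 seconds (sketch; the original
# command was not posted).  Each line of /proc/net/rt_cache is one
# cached route entry.
while true; do
    date
    wc -l < /proc/net/rt_cache | awk '{printf "%8d rt_cache\n", $1}'
    sleep 30
done
```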
* Re: V2.4 policy router operates faster/better than V2.6
From: Robert Olsson @ 2005-01-05 13:18 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Stephen Hemminger
Jeremy M. Guthrie writes:
> How would I check? It should be in the hundreds of thousands.
Good question, Stephen...
Yes, it seems like a pretty hefty load: a forwarding rate of ~92 kpps, a
drop rate of ~1.4 kpps, and a dst hash mostly at 50-60 k entries, if
I read the stats correctly.
And 2.4 was able to handle this but not 2.6.10?
Assuming things are used and set up identically: 2.6 uses RCU for route hash
locking. Any "dst cache overflow" messages seen?
A couple of lines of rtstat would be very interesting from this box.
Also check that the CPUs share the RX packet load. CPU0 affinity to eth0
and CPU1 to eth1 seems to be best. It gives cache bouncing at "TX" and
slab jobs, but we have to accept that for now.
> 13:37:25     CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
> 13:38:24     all    0.14    0.00    0.12    0.12    2.02   42.89   54.71  25900.70
> 13:38:24       0    0.03    0.00    0.05    0.22    0.00   16.67   83.03   2246.10
> 13:38:24       1    0.25    0.00    0.20    0.03    4.02   69.12   26.40  23654.55
This looks weird to me... how can there be idle CPU left? Due to the imbalance?
Check /proc/net/softnet_stat.
I haven't used mpstat. Is %soft *all* softirqs, or only softirqs deferred
to ksoftirqd?
--ro
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-05 15:18 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
On Wednesday 05 January 2005 07:18 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > How would I check? It should be in the hundreds of thousands.
>
> Good question Stephen,..
>
> Yes, it seems like a pretty hefty load: a forwarding rate of ~92 kpps, a
> drop rate of ~1.4 kpps, and a dst hash mostly at 50-60 k entries, if
> I read the stats correctly.
Yeah, the load will be high. I'm expecting this to be watching ~750 Mbps by
next December. The app profiles all traffic going in and out of our data
centers.
> And 2.4 was able to handle this but not 2.6.10?
Yes, it does handle it. It runs harder, i.e. 2.6 caps out at ~50% utilization
where 2.4 might run 60-75% utilized.
> Assuming things are used and set up identically: 2.6 uses RCU for route
> hash locking. Any "dst cache overflow" messages seen?
No.
> A couple of lines of rtstat would be very interesting from this box.
I'm not seeing the /proc/net/rt_cache_stat file. Is there a kernel option I
need to recompile with for rt_cache_stat to show up in /proc?
> Also check that the CPUs share the RX packet load. CPU0 affinity to eth0
> and CPU1 to eth1 seems to be best. It gives cache bouncing at "TX" and
> slab jobs, but we have to accept that for now.
How would I go about doing this?
> > 13:37:25     CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
> > 13:38:24     all    0.14    0.00    0.12    0.12    2.02   42.89   54.71  25900.70
> > 13:38:24       0    0.03    0.00    0.05    0.22    0.00   16.67   83.03   2246.10
> > 13:38:24       1    0.25    0.00    0.20    0.03    4.02   69.12   26.40  23654.55
>
> This looks weird to me... we cannot have CPU left? Due to the imbalance?
> Check /proc/net/softnet_stat,
cat /proc/net/softnet_stat
5592c972 00000000 00001fc8 00000000 00000000 00000000 00000000 00000000 00391c3f
000f1991 00000000 00000000 00000000 00000000 00000000 00000000 00000000 001292ba
> I haven't used mpstat. Is %soft *all* softirqs, or only softirqs deferred
> to ksoftirqd?
"%soft"
Show the percentage of time spent by the CPU or CPUs to service
softirqs. A softirq (software interrupt) is one of up to 32
enumerated software interrupts which can run on multiple CPUs at
once.
* Re: V2.4 policy router operates faster/better than V2.6
From: Robert Olsson @ 2005-01-05 16:30 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> Yeah, the load will be high. I'm expecting this to be watching ~ 750 mbps by
> next December. The app profiles all traffic going in and out of our data
> centers.
BW or pps in itself is not as much of a challenge as the handling of
concurrent flows.
> I'm not seeing the /proc/net/rt_cache_stat file. Is there a kernel
> option I need to recompile with for rt_cache_stat to show up in /proc?
No, it's there without any options. It would be nice to see the output from rtstat.
> > Also check that the CPUs share the RX packet load. CPU0 affinity to eth0
> > and CPU1 to eth1 seems to be best. It gives cache bouncing at "TX" and
> > slab jobs, but we have to accept that for now.
> How would I go about doing this?
Assume you route packets between eth0 <-> eth1:
Set the eth0 IRQ to CPU0 and the eth1 IRQ to CPU1 with /proc/irq/XX/smp_affinity.
Disable the irq balancer etc.
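Concretely, on this box (IRQ 18 = eth3 and IRQ 20 = eth2, per the /proc/interrupts output later in the thread), the pinning might look like this sketch. Run as root; the balancer process name is an assumption and varies by distribution:

```shell
# Pin eth3's IRQ to CPU0 and eth2's IRQ to CPU1 (bitmask: bit N = CPU N).
echo 1 > /proc/irq/18/smp_affinity   # 0x1 -> CPU0 only
echo 2 > /proc/irq/20/smp_affinity   # 0x2 -> CPU1 only
# Stop the userspace balancer so it does not rewrite the masks
# (daemon name is distribution-dependent; an assumption here):
killall irqbalance 2>/dev/null
```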
> cat /proc/net/softnet_stat
  total    dropped  time_squeeze throttled FR_hit FR_success FR_defer FR_def_out cpu_collision
> 5592c972 00000000 00001fc8 00000000 00000000 00000000 00000000 00000000 00391c3f
> 000f1991 00000000 00000000 00000000 00000000 00000000 00000000 00000000 001292ba
See! One line per CPU. So CPU0 is handling almost all packets.
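To make the per-CPU split explicit, the hex columns can be decoded with a small loop (a sketch; the first column is packets processed, the second dropped):

```shell
# Decode /proc/net/softnet_stat: one line per CPU, hex counters.
cpu=0
while read -r processed dropped rest; do
    printf 'CPU%d processed=%d dropped=%d\n' "$cpu" "0x$processed" "0x$dropped"
    cpu=$((cpu + 1))
done < /proc/net/softnet_stat
```

On the output quoted above this shows CPU0 with ~1.4 billion processed packets and CPU1 with under a million, confirming the imbalance.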
> "%soft"
> Show the percentage of time spent by the CPU or CPUs to service
> softirqs. A softirq (software interrupt) is one of up to 32
> enumerated software interrupts which can run on multiple CPUs
Well, yes, but I had a more specific question. I'll look into mpstat; where do
I find it? Kernel patches?
Also be aware that packet forwarding with SMP/NUMA is very much a research
topic today; it is not easy, or even possible, to get aggregated performance
from several CPUs in every setup. Anyway, we are beginning to see some
benefits now as we better understand the problems.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-05 17:35 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
On Wednesday 05 January 2005 10:30 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > Yeah, the load will be high. I'm expecting this to be watching ~ 750
> > mbps by next December. The app profiles all traffic going in and out of
> > our data centers.
> BW or pps in itself is not as much of a challenge as the handling of
> concurrent flows.
Roger that.
> > I'm not seeing the /proc/net/rt_cache_stat file. Is there a kernel
> > option I need to recompile with for rt_cache_stat to show up in /proc?
>
> No, it's there without any options. It would be nice to see the output from rtstat.
Output from rtstat:
rtstat
fopen: No such file or directory
cd /proc/net/
ls -la
total 0
dr-xr-xr-x 5 root root 0 2005-01-04 08:53 .
dr-xr-xr-x 72 root root 0 2005-01-04 08:53 ..
-r--r--r-- 1 root root 0 2005-01-05 10:32 anycast6
-r--r--r-- 1 root root 0 2005-01-05 10:32 arp
dr-xr-xr-x 2 root root 0 2005-01-05 10:32 atm
-r--r--r-- 1 root root 0 2005-01-05 10:32 dev
-r--r--r-- 1 root root 0 2005-01-05 10:32 dev_mcast
dr-xr-xr-x 2 root root 0 2005-01-05 10:32 dev_snmp6
-r--r--r-- 1 root root 0 2005-01-04 08:53 if_inet6
-r--r--r-- 1 root root 0 2005-01-05 10:32 igmp
-r--r--r-- 1 root root 0 2005-01-05 10:32 igmp6
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip6_flowlabel
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip_mr_cache
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip_mr_vif
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip_tables_matches
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip_tables_names
-r--r--r-- 1 root root 0 2005-01-05 10:32 ip_tables_targets
-r--r--r-- 1 root root 0 2005-01-05 10:32 ipv6_route
-r--r--r-- 1 root root 0 2005-01-05 10:32 mcfilter
-r--r--r-- 1 root root 0 2005-01-05 10:32 mcfilter6
-r--r--r-- 1 root root 0 2005-01-05 10:32 netlink
-r--r--r-- 1 root root 0 2005-01-05 10:32 netstat
-r--r--r-- 1 root root 0 2005-01-05 10:32 psched
-r--r--r-- 1 root root 0 2005-01-05 10:32 raw
-r--r--r-- 1 root root 0 2005-01-05 10:32 raw6
-r--r--r-- 1 root root 0 2005-01-05 10:32 route
dr-xr-xr-x 6 root root 0 2005-01-05 10:32 rpc
-r--r--r-- 1 root root 0 2005-01-05 10:32 rt6_stats
-r--r--r-- 1 root root 0 2005-01-05 10:32 rt_acct
-r--r--r-- 1 root root 0 2005-01-05 10:32 rt_cache
-r--r--r-- 1 root root 0 2005-01-05 10:32 snmp
-r--r--r-- 1 root root 0 2005-01-05 10:32 snmp6
-r--r--r-- 1 root root 0 2005-01-05 10:32 sockstat
-r--r--r-- 1 root root 0 2005-01-05 10:32 sockstat6
-r--r--r-- 1 root root 0 2005-01-05 10:32 softnet_stat
dr-xr-xr-x 2 root root 0 2005-01-05 10:32 stat
-r--r--r-- 1 root root 0 2005-01-04 08:53 tcp
-r--r--r-- 1 root root 0 2005-01-05 10:32 tcp6
-r--r--r-- 1 root root 0 2005-01-05 10:32 tr_rif
-r--r--r-- 1 root root 0 2005-01-04 08:53 udp
-r--r--r-- 1 root root 0 2005-01-05 10:32 udp6
-r--r--r-- 1 root root 0 2005-01-05 10:32 unix
-r--r--r-- 1 root root 0 2005-01-05 10:32 wireless
> > > Also check that the CPUs share the RX packet load. CPU0 affinity to
> > > eth0 and CPU1 to eth1 seems to be best. It gives cache bouncing at
> > > "TX" and slab jobs, but we have to accept that for now.
> >
> > How would I go about doing this?
>
> Assume you route packets between eth0 <-> eth1
Yup
> Set eth0 irq to CPU0 and eth1 to CPU1 with /proc/irq/XX/smp_affinity
Done
> Disable irqbalancer etc.
Done
I'll let you know what I see for stats once I get some collected.
> > cat /proc/net/softnet_stat
>
> total    dropped  time_squeeze throttled FR_hit FR_success FR_defer FR_def_out cpu_collision
>
> > 5592c972 00000000 00001fc8 00000000 00000000 00000000 00000000 00000000 00391c3f
> > 000f1991 00000000 00000000 00000000 00000000 00000000 00000000 00000000 001292ba
>
> See! One line per CPU. So CPU0 is handling almost all packets.
cat /proc/interrupts
CPU0 CPU1
0: 674862 93484967 IO-APIC-edge timer
1: 564 9 IO-APIC-edge i8042
7: 0 0 IO-APIC-level ohci_hcd
8: 1 1 IO-APIC-edge rtc
12: 268 62 IO-APIC-edge i8042
14: 2 0 IO-APIC-edge ide0
18: 2105131410 9140835 IO-APIC-level eth3
20: 1077 248075156 IO-APIC-level eth2
27: 118224 1 IO-APIC-level eth0
28: 36298 49 IO-APIC-level aic7xxx
30: 0 0 IO-APIC-level acpi
NMI: 0 0
LOC: 94168097 94168094
ERR: 0
MIS: 0
> > "%soft"
> > Show the percentage of time spent by the CPU or CPUs to service
> > softirqs. A softirq (software interrupt) is one of up to 32
> > enumerated software interrupts which can run on multiple CPUs
>
> Well, yes, but I had a more specific question. I'll look into mpstat; where
> do I find it? Kernel patches?
Sorry about that. New to the list. I'm not suggesting anything. I
appreciate the help!
Suse listed mpstat as part of sysstat 5.1.2. I'm running stock 2.6.10.
> Also be aware that packet forwarding with SMP/NUMA is very much a research
> topic today; it is not easy, or even possible, to get aggregated performance
> from several CPUs in every setup. Anyway, we are beginning to see some
> benefits now as we better understand the problems.
Understood. As long as I know this, I can articulate it to my uppers for
bigger hardware. My current system is a dual P-III 700 MHz; it may be time for
an upgrade. However, I figure this may also offer a good environment to help
provide you guys with a taxed system running a load of flows. Nothing like
finding fun stuff while a system is ready to fall over.
Would a single hyper-threaded CPU help this, or should I default to a normal
dual-CPU system?
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-05 19:25 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
After smp_affinity adjustments and turning off IRQ balancing.
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:537271635 errors:2377048 dropped:2377048 overruns:1849169 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1804727106 (1721.1 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
* Re: V2.4 policy router operates faster/better than V2.6
From: Robert Olsson @ 2005-01-05 20:22 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> After smp_affinity adjustments and turning off IRQ balancing.
>
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> RX packets:537271635 errors:2377048 dropped:2377048 overruns:1849169
Worse?
Yes. I remember now: Harald moved rt_cache_stat to /proc/net/stat/ and wrote a
new utility for it. It should be in the iproute2 package; Stephen knows the
details. Otherwise change the path in rtstat. You need this to relate and
verify the load.
Check throughput and how the packet load is shared under 2.4: /proc/net/softnet_stat
and rtstat. Also consider how the comparison can be made fair w.r.t. traffic patterns.
Do the same with 2.6.
Prepare for an oprofile run. I'm lazy and compile the stuff I use into the kernel;
if you can get the same result with modules, that's OK.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-05 20:52 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
On Wednesday 05 January 2005 02:22 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > After smp_affinity adjustments and turning off IRQ balancing.
> >
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> >
> > RX packets:537271635 errors:2377048 dropped:2377048 overruns:1849169
>
> Worse?
>
> Yes. I remember now: Harald moved rt_cache_stat to /proc/net/stat/ and wrote
> a new utility for it. It should be in the iproute2 package; Stephen knows the
> details. Otherwise change the path in rtstat. You need this to relate and
> verify the load.
Hm...
ls -la /proc/net/stat/
total 0
dr-xr-xr-x 2 root root 0 Jan 5 14:47 .
dr-xr-xr-x 5 root root 0 Jan 5 11:50 ..
-r--r--r-- 1 root root 0 Jan 5 14:47 arp_cache
-r--r--r-- 1 root root 0 Jan 5 14:47 clip_arp_cache
-r--r--r-- 1 root root 0 Jan 5 14:47 ndisc_cache
-r--r--r-- 1 root root 0 Jan 5 14:47 rt_cache
> Check throughput and how the packet load is shared under 2.4:
> /proc/net/softnet_stat and rtstat.
Will do.
> Also consider how the comparison can be made fair w.r.t. traffic patterns.
> Do the same with 2.6.
Not sure I follow. I can see higher pps/byte throughputs on switch port
counters when I run with 2.4 vs. 2.6. I'll double-check, though. I am in the
'dev' state, so I can swap between V2.4 and V2.6 as necessary to compare
during roughly equivalent times. The only delay is the time between reboots.
> Prepare for an oprofile run. I'm lazy and compile the stuff I use into the
> kernel; if you can get the same result with modules, that's OK.
Okay.
>
>
> --ro
* Re: V2.4 policy router operates faster/better than V2.6
From: Jeremy M. Guthrie @ 2005-01-06 15:26 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
On Wednesday 05 January 2005 02:22 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > After smp_affinity adjustments and turning off IRQ balancing.
> >
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> >
> > RX packets:537271635 errors:2377048 dropped:2377048 overruns:1849169
>
> Worse?
>
> Yes. I remember now: Harald moved rt_cache_stat to /proc/net/stat/ and wrote
> a new utility for it. It should be in the iproute2 package; Stephen knows the
> details. Otherwise change the path in rtstat. You need this to relate and
> verify the load.
I still don't see the rt_cache_stat file even under 2.4.28 stock. See below.
> Check throughput and how the packet load is shared under 2.4:
> /proc/net/softnet_stat and rtstat. Also consider how the comparison can be
> made fair w.r.t. traffic patterns. Do the same with 2.6.
wc -l /proc/net/rt_cache
Segmentation fault
cat /proc/net/rt_cache > /tmp/rt_cache ; wc -l /tmp/rt_cache
cat: write error: Bad address
57664 /tmp/rt_cache
ls -la /proc/net/
total 0
dr-xr-xr-x 6 root root 0 Jan 6 09:22 .
dr-xr-xr-x 55 root root 0 Jan 5 15:23 ..
-r--r--r-- 1 root root 0 Jan 6 09:22 arp
dr-xr-xr-x 2 root root 0 Jan 6 09:22 atm
-r--r--r-- 1 root root 0 Jan 6 09:22 dev
-r--r--r-- 1 root root 0 Jan 6 09:22 dev_mcast
dr-xr-xr-x 2 root root 0 Jan 6 09:22 drivers
-r--r--r-- 1 root root 0 Jan 6 09:22 igmp
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_mr_cache
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_mr_vif
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_queue
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_tables_matches
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_tables_names
-r--r--r-- 1 root root 0 Jan 6 09:22 ip_tables_targets
-r--r--r-- 1 root root 0 Jan 6 09:22 mcfilter
-r--r--r-- 1 root root 0 Jan 6 09:22 netlink
-r--r--r-- 1 root root 0 Jan 6 09:22 netstat
-r--r--r-- 1 root root 0 Jan 6 09:22 pnp
-r--r--r-- 1 root root 0 Jan 6 09:22 psched
-r--r--r-- 1 root root 0 Jan 6 09:22 raw
-r--r--r-- 1 root root 0 Jan 6 09:22 route
dr-xr-xr-x 2 root root 0 Jan 6 09:22 rpc
-r--r--r-- 1 root root 0 Jan 6 09:22 rt_acct
-r--r--r-- 1 root root 0 Jan 6 09:22 rt_cache
-r--r--r-- 1 root root 0 Jan 6 09:22 snmp
-r--r--r-- 1 root root 0 Jan 6 09:22 sockstat
-r--r--r-- 1 root root 0 Jan 6 09:22 softnet_stat
dr-xr-xr-x 2 root root 0 Jan 6 09:22 stat
-r--r--r-- 1 root root 0 Jan 6 09:22 tcp
-r--r--r-- 1 root root 0 Jan 6 09:22 udp
-r--r--r-- 1 root root 0 Jan 6 09:22 unix
-r--r--r-- 1 root root 0 Jan 6 09:22 wireless
ls -la /proc/net/stat/
total 0
dr-xr-xr-x 2 root root 0 Jan 6 09:22 .
dr-xr-xr-x 6 root root 0 Jan 6 09:22 ..
-r--r--r-- 1 root root 0 Jan 6 09:22 arp_cache
-r--r--r-- 1 root root 0 Jan 6 09:22 clip_arp_cache
-r--r--r-- 1 root root 0 Jan 6 09:22 rt_cache
cat /proc/net/softnet_stat
96deb140 0032844e 00012bbb 00000fd8 00000000 00000000 00000000 00000000 00000dd4
00013dda 00000000 00000000 00000000 00000000 00000000 00000000 00000000 0000041f
cat /proc/interrupts
CPU0 CPU1
0: 821852 5664906 IO-APIC-edge timer
1: 484 1247 IO-APIC-edge keyboard
8: 0 2 IO-APIC-edge rtc
14: 5 16 IO-APIC-edge ide0
18: 1928608529 1374 IO-APIC-level eth3
20: 1 226113098 IO-APIC-level eth2
27: 7950 34312 IO-APIC-level eth0
28: 2718 8097 IO-APIC-level aic7xxx
30: 0 0 IO-APIC-level acpi
NMI: 0 0
LOC: 6487240 6487239
ERR: 0
MIS: 0
* Re: V2.4 policy router operates faster/better than V2.6
From: Robert Olsson @ 2005-01-06 18:15 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> I still don't see the rt_cache_stat file even under 2.4.28 stock. See below.
> ls -la /proc/net/stat/
> total 0
> -r--r--r-- 1 root root 0 Jan 6 09:22 rt_cache
rtstat is replaced by lnstat, which is in the iproute2 package:
http://developer.osdl.org/dev/iproute2/download/
(Old version ftp://robur.slu.se/pub/Linux/net-development/rt_cache_stat/rtstat.c)
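For reference, lnstat prints pipe-separated columns whose header names wrap over several lines (as seen in the attachments later in this thread). A sketch of rejoining the wrapped header fragments, using two real rt_cache column names with illustrative values:

```python
# Join lnstat's wrapped header fragments into full column names and pair
# them with one data row. The column names are real rt_cache counters;
# the values are illustrative, not from the thread.
header_lines = [
    "  in_hit|in_slow_|",
    "        |     tot|",
]
data_line = "   9092|   21888|"

cols = ["".join(part.strip() for part in parts)
        for parts in zip(*(l.split("|")[:-1] for l in header_lines))]
values = [int(v) for v in data_line.split("|")[:-1]]
row = dict(zip(cols, values))
print(row)
```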
> cat /proc/net/softnet_stat
> 96deb140 0032844e 00012bbb 00000fd8 00000000 00000000 00000000 00000000
> 00000dd4
> 00013dda 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> 0000041f
You only use CPU0 for packet processing. Also it seems you use the non-NAPI
version of e1000.
> cat /proc/interrupts
> CPU0 CPU1
> 0: 821852 5664906 IO-APIC-edge timer
> 1: 484 1247 IO-APIC-edge keyboard
> 8: 0 2 IO-APIC-edge rtc
> 14: 5 16 IO-APIC-edge ide0
> 18: 1928608529 1374 IO-APIC-level eth3
> 20: 1 226113098 IO-APIC-level eth2
> 27: 7950 34312 IO-APIC-level eth0
Traffic is flowing from eth3->eth2. Why all the interrupts on eth2/CPU1?
Is the traffic mostly unidirectional?
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 18:15 ` Robert Olsson
@ 2005-01-06 19:35 ` Jeremy M. Guthrie
2005-01-06 20:29 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 19:35 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1.1: Type: text/plain, Size: 1938 bytes --]
On Thursday 06 January 2005 12:15 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > I still don't see the rt_cache_stat file even under 2.4.28 stock. See
> > below.
> >
> > ls -la /proc/net/stat/
> > total 0
> > -r--r--r-- 1 root root 0 Jan 6 09:22 rt_cache
>
> rtstat has been replaced by lnstat, which is in the iproute2 package:
> http://developer.osdl.org/dev/iproute2/download/
>
> (Old version
> ftp://robur.slu.se/pub/Linux/net-development/rt_cache_stat/rtstat.c)
Please see the attachment.
> > cat /proc/net/softnet_stat
> > 96deb140 0032844e 00012bbb 00000fd8 00000000 00000000 00000000 00000000
> > 00000dd4
> > 00013dda 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> > 0000041f
>
> You only use CPU0 for packet processing. Also it seems you use the
> non-NAPI version of e1000.
The E1000 driver is the stock driver in 2.4.28.
>
> > cat /proc/interrupts
> > CPU0 CPU1
> > 0: 821852 5664906 IO-APIC-edge timer
> > 1: 484 1247 IO-APIC-edge keyboard
> > 8: 0 2 IO-APIC-edge rtc
> > 14: 5 16 IO-APIC-edge ide0
> > 18: 1928608529 1374 IO-APIC-level eth3
> > 20: 1 226113098 IO-APIC-level eth2
> > 27: 7950 34312 IO-APIC-level eth0
>
> Traffic is flowing from eth3->eth2. Why all the interrupts on eth2/CPU1?
> Is the traffic mostly unidirectional?
eth2 is TX only. We don't receive anything on it. This system should only
ever RX on eth3 and TX on eth2 as part of its function. Eth0 is the
management interface on the host.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #1.2: results-2.4.28.txt --]
[-- Type: text/plain, Size: 26640 bytes --]
arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|
entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g| entries| in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis| entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g|
| | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m| | | tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search| | | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m|
| | | | | | | | | | | iss| | | | | | | | | | | | | | | | | | | | | | | | | | | | | iss|
22| 9| 425| 1| 17202| 14599| 0| 0| 0| 75743| 0| 0| 61129| 6329| 20428| 0| 0| 1539| 0| 0| 8| 28| 0| 1677| 1675| 0| 0| 143038| 145| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4649| 0| 0|
22| 0| 0| 0| 1| 1| 0| 0| 0| 2| 0| 0| 62833| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 10| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 64306| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 65602| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 59707| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 60684| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 62249| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 62249| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 65000| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 58800| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 58800| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 62135| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 63677| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 63677| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 59013| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 59013| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 62462| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 62462| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 1| 0| 0| 0| 0| 4| 0| 0| 65225| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 1| 0| 0| 0| 0| 4| 0| 0| 65225| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|
entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g| entries| in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis| entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g|
| | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m| | | tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search| | | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m|
| | | | | | | | | | | iss| | | | | | | | | | | | | | | | | | | | | | | | | | | | | iss|
24| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 60904| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 60904| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 63890| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 63890| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 59738| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 59738| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 63346| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 63346| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 0| 0| 0| 0| 4| 0| 0| 58798| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 0| 0| 0| 0| 4| 0| 0| 58798| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 62518| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 62518| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 13| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 2| 2| 0| 0| 0| 3| 0| 0| 65364| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 12| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 2| 2| 0| 0| 0| 3| 0| 0| 65364| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 12| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 2| 2| 0| 0| 0| 4| 0| 0| 61111| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 11| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 2| 2| 0| 0| 0| 4| 0| 0| 61111| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 11| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 63152| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 63152| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 58223| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 4| 0| 0| 58223| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|arp_cach|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|clip_arp|
entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g| entries| in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis| entries| allocs|destroys|hash_gro| lookups| hits|res_fail|rcv_prob|rcv_prob|periodic|forced_g|forced_g|
| | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m| | | tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search| | | | ws| | | ed|es_mcast|es_ucast|_gc_runs| c_runs|c_goal_m|
| | | | | | | | | | | iss| | | | | | | | | | | | | | | | | | | | | | | | | | | | | iss|
24| 0| 0| 0| 2| 2| 0| 0| 0| 4| 0| 0| 62044| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 15| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 2| 2| 0| 0| 0| 4| 0| 0| 62044| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 15| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 64932| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 64932| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
25| 0| 0| 0| 2| 2| 0| 0| 0| 4| 0| 0| 60712| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 14| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 1| 0| 1| 1| 0| 0| 0| 2| 0| 0| 62304| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 10| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 63758| 1| 1| 0| 0| 1| 0| 0| 0| 0| 0| 1| 1| 0| 0| 8| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 1| 0| 0| 64967| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 2| 0| 0| 58931| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 60836| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 2| 0| 0| 62574| 1| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 11| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 64121| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 65615| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 10| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 2| 0| 0| 60057| 1| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 11| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
23| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 62008| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 63651| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 63651| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
22| 0| 0| 0| 1| 1| 0| 0| 0| 1| 0| 0| 63651| 0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
25| 0| 0| 0| 2| 2| 0| 0| 0| 6| 0| 0| 61300| 0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 12| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
24| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0| 0| 63087| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 19:35 ` Jeremy M. Guthrie
@ 2005-01-06 20:29 ` Robert Olsson
2005-01-06 20:54 ` Jeremy M. Guthrie
` (2 more replies)
0 siblings, 3 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-06 20:29 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> > You only use CPU0 for packet processing. Also it seems you use the
> > non-NAPI version of e1000.
> The E1000 driver is the stock driver in 2.4.28.
There is a kernel config option for it: "Use Rx Polling (NAPI)".
> eth2 is TX only. We don't receive anything on it. This system should only
> ever RX on eth3 and TX on eth2 as part of its function.
OK! So the traffic is unidirectional: CPU0 processes all skb's and passes
them over to CPU1, which touches and frees them. You could try other
affinity settings later on, but there are many options depending on what
we want to do. If we just want to compare 2.4/2.6, we can start out simple
by setting affinity so that only one CPU is used, which is almost what you
have now. We hit all the complicated problems immediately... Right now
skb's pass between the CPUs, which causes cache bouncing and makes the
slab allocator rebalance its pools. So using just one CPU is probably
better... If we had incoming packets on eth2, CPU1 could eventually have
been of use.
Install the NAPI driver and set affinity so that eth2/eth3 use the same
CPU to start with. The stats info you sent was almost impossible to read.
See if you can capture only the rtstat output.
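Setting the affinity described here means writing a hex CPU bitmask to /proc/irq/<n>/smp_affinity. The following is an editorial sketch, not from the original mail; the IRQ numbers 18 (eth3) and 20 (eth2) are taken from the /proc/interrupts dump earlier in the thread:

```python
# Build the hex bitmask that /proc/irq/<n>/smp_affinity expects: setting
# bit N selects CPU N. This prints the shell commands rather than writing
# the files, since the writes require root on the target box.
def affinity_mask(cpus):
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

for irq in (18, 20):          # eth3 and eth2 per the earlier dump
    print(f"echo {affinity_mask([0])} > /proc/irq/{irq}/smp_affinity")
```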
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 20:29 ` Robert Olsson
@ 2005-01-06 20:54 ` Jeremy M. Guthrie
2005-01-06 20:55 ` Jeremy M. Guthrie
2005-01-06 21:19 ` Jeremy M. Guthrie
2 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 20:54 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 1969 bytes --]
On Thursday 06 January 2005 02:29 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > You only use CPU0 for packet processing. Also it seems you use the
> > > non-NAPI version of e1000.
> >
> > The E1000 driver is the stock driver in 2.4.28.
> There is a kernel config option for it: "Use Rx Polling (NAPI)".
I'm recompiling the module and will reload the module.
> > eth2 is TX only. We don't receive anything on it. This system should
> > only ever RX on eth3 and TX on eth2 as part of its function.
>
> OK! So the traffic is unidirectional: CPU0 processes all skb's and passes
> them over to CPU1, which touches and frees them. You could try other
> affinity settings later on, but there are many options depending on what
> we want to do. If we just want to compare 2.4/2.6, we can start out simple
> by setting affinity so that only one CPU is used, which is almost what you
> have now. We hit all the complicated problems immediately... Right now
> skb's pass between the CPUs, which causes cache bouncing and makes the
> slab allocator rebalance its pools. So using just one CPU is probably
> better... If we had incoming packets on eth2, CPU1 could eventually have
> been of use.
>
> Install the NAPI driver and set affinity so that eth2/eth3 use the same
> CPU to start with.
I installed the NAPI driver and now I drop packets under 2.4.28. I tried
setting eth2/3 onto the same CPU. If both eth2/3 are on CPU0, I drop a lot
of packets. If eth2 is on CPU0 and eth3 on CPU1, I still drop packets, just
not as fast.
I am going to try V2.6 w/o NAPI to see how much that helps.
> The stats info you sent was almost impossible to read. See if you can
> capture only the rtstat output.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 20:29 ` Robert Olsson
2005-01-06 20:54 ` Jeremy M. Guthrie
@ 2005-01-06 20:55 ` Jeremy M. Guthrie
2005-01-06 21:19 ` Jeremy M. Guthrie
2 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 20:55 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1.1: Type: text/plain, Size: 1664 bytes --]
Forgot the attachment.
On Thursday 06 January 2005 02:29 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > You only use CPU0 for packet processing. Also it seems you use the
> > > non-NAPI version of e1000.
> >
> > The E1000 driver is the stock driver in 2.4.28.
>
> There is a kernel config option for it: "Use Rx Polling (NAPI)".
>
> > eth2 is TX only. We don't receive anything on it. This system should
> > only ever RX on eth3 and TX on eth2 as part of its function.
>
> OK! So the traffic is unidirectional: CPU0 processes all skb's and passes
> them over to CPU1, which touches and frees them. You could try other
> affinity settings later on, but there are many options depending on what
> we want to do. If we just want to compare 2.4/2.6, we can start out simple
> by setting affinity so that only one CPU is used, which is almost what you
> have now. We hit all the complicated problems immediately... Right now
> skb's pass between the CPUs, which causes cache bouncing and makes the
> slab allocator rebalance its pools. So using just one CPU is probably
> better... If we had incoming packets on eth2, CPU1 could eventually have
> been of use.
>
> Install the NAPI driver and set affinity so that eth2/eth3 use the same
> CPU to start with.
>
> The stats info you sent was almost impossible to read. See if you can
> capture only the rtstat output.
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #1.2: results-2.4.28.txt --]
[-- Type: text/plain, Size: 10005 bytes --]
rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|
in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis|
| tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search|
9092| 21888| 0| 0| 1656| 0| 0| 9| 42| 0| 1836| 1833| 0| 0| 158622| 206|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 1| 1| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 1| 1| 0| 0| 9| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 14| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 14| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|
in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis|
| tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 9| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 1| 1| 0| 0| 14| 0|
0| 2| 0| 0| 0| 0| 0| 0| 0| 0| 1| 1| 0| 0| 14| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 7| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 3| 0|
rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|rt_cache|
in_hit|in_slow_|in_slow_|in_no_ro| in_brd|in_marti|in_marti| out_hit|out_slow|out_slow|gc_total|gc_ignor|gc_goal_|gc_dst_o|in_hlist|out_hlis|
| tot| mc| ute| | an_dst| an_src| | _tot| _mc| | ed| miss| verflow| _search|t_search|
1| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 16| 0|
1| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 16| 0|
1| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 16| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 8| 0|
0| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 8| 0|
2| 2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 16| 0|
1| 1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 8| 0|
1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0|
1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0|
1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 6| 0|
2| 2| 0| 0| 1| 0| 0| 0| 0| 0| 1| 1| 0| 0| 24| 0|
1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 2| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
1| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 2| 0| 0| 1| 0| 0| 0| 0| 0| 1| 1| 0| 0| 16| 0|
0| 2| 0| 0| 1| 0| 0| 0| 0| 0| 1| 1| 0| 0| 16| 0|
2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
2| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 4| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0| 0|
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 20:29 ` Robert Olsson
2005-01-06 20:54 ` Jeremy M. Guthrie
2005-01-06 20:55 ` Jeremy M. Guthrie
@ 2005-01-06 21:19 ` Jeremy M. Guthrie
2005-01-06 21:36 ` Robert Olsson
2 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 21:19 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 2005 bytes --]
Disabling Rx Polling (NAPI) has kept V2.6 from dropping packets. Argh... I
should have caught that sooner. 8( Well, I have a window of opportunity
here. If you want, I can still provide the broken environment, but that
seems pointless while NAPI is not acting right... What is evident is that
this setup will run better until we bump up against the top of CPU0.
On Thursday 06 January 2005 02:29 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > You only use CPU0 for packet processing. Also it seems you use the
> > > non-NAPI version of e1000.
> >
> > The E1000 driver is the stock driver in 2.4.28.
>
> There is a kernel config option for it: "Use Rx Polling (NAPI)".
>
> > eth2 is TX only. We don't receive anything on it. This system should
> > only ever RX on eth3 and TX on eth2 as part of its function.
>
> OK! So the traffic is unidirectional: CPU0 processes all skb's and passes
> them over to CPU1, which touches and frees them. You could try other
> affinity settings later on, but there are many options depending on what
> we want to do. If we just want to compare 2.4/2.6, we can start out simple
> by setting affinity so that only one CPU is used, which is almost what you
> have now. We hit all the complicated problems immediately... Right now
> skb's pass between the CPUs, which causes cache bouncing and makes the
> slab allocator rebalance its pools. So using just one CPU is probably
> better... If we had incoming packets on eth2, CPU1 could eventually have
> been of use.
>
> Install the NAPI driver and set affinity so that eth2/eth3 use the same
> CPU to start with.
>
> The stats info you sent was almost impossible to read. See if you can
> capture only the rtstat output.
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 21:19 ` Jeremy M. Guthrie
@ 2005-01-06 21:36 ` Robert Olsson
2005-01-06 21:46 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-06 21:36 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> Disabling Rx Polling (NAPI) has kept V2.6 from dropping packets. Argh...
That's fine...
> I should have caught that sooner. 8( Well, I have a window of opportunity
> here. If you want, I can still provide the broken environment, but that
> seems pointless while NAPI is not acting right... What is evident is that
> this setup will run better until we bump up against the top of CPU0.
I don't follow...
The stats didn't show any numbers so we don't know your load.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 21:36 ` Robert Olsson
@ 2005-01-06 21:46 ` Jeremy M. Guthrie
2005-01-06 22:11 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 21:46 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 1016 bytes --]
On Thursday 06 January 2005 03:36 pm, Robert Olsson wrote:
> > I should have caught that sooner. 8( Well, I have a window of
> > opportunity here. If you want I can still provide a broken environment
> > but that seems pointless while NAPI is not acting right... what is
> > evident is that this will run better till we start to bump up against
> > the top of CPU0.
>
> I don't follow...
Technically, by turning off NAPI, I have 'solved' my short-term packet-loss
problem. However, I can still provide you a window for further
troubleshooting if that would be beneficial.
> The stats didn't show any numbers so we don't know your load.
Are you referring to the # of entries in the rt_cache table?
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 21:46 ` Jeremy M. Guthrie
@ 2005-01-06 22:11 ` Robert Olsson
2005-01-06 22:18 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-06 22:11 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> > I don't follow...
> Technically by turning off NAPI, I have 'solved' my short term packet loss
> problem.
You need NAPI, of course; just wait until your packet load increases a
little bit... I saw drops in your 2.4 /proc/net/softnet_stat, which
indicates you are close to the limit of your system's performance. With
NAPI you keep up system performance regardless of incoming load.
The e1000 driver has some bugs in "your setup": it enables irq's when
there is only RX and no TX, where irq's should stay disabled. I sent a
patch to Intel.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 22:11 ` Robert Olsson
@ 2005-01-06 22:18 ` Jeremy M. Guthrie
2005-01-06 22:35 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-06 22:18 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 1292 bytes --]
On Thursday 06 January 2005 04:11 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > I don't follow...
> >
> > Technically by turning off NAPI, I have 'solved' my short term packet
> > loss problem.
>
> You need NAPI, of course; just wait until your packet load increases a
> little bit...
I agree. Running mpstat now shows that even without packet drops, only
2-5% of the CPU is free.
> I saw drops in your 2.4 /proc/net/softnet_stat, which indicates you are
> close to the limit of your system's performance.
Should those drops show up as RX drops in ifconfig?
> With NAPI you keep up system performance regardless of incoming load.
You mentioned before that "The stats didn't show any numbers so we don't
know your load." Was there a command you wanted me to re-run?
> The e1000 driver has some bugs in "your setup" as it enables irq's when
> there is only RX and no TX and should have irq's disabled. I sent patch
> to intel.
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 22:18 ` Jeremy M. Guthrie
@ 2005-01-06 22:35 ` Robert Olsson
2005-01-07 16:17 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-06 22:35 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> You mentioned before that "The stats didn't show any numbers so we
> don't know your load." Was there a command you wanted me to re-run?
rtstat should show the routing/packet load.
From a system like yours: 2*933 MHz PIII in production for tens of thousands
of users, many filters and full BGP routing. Current (late here) use.
ifstat2 eth*
RX -------------------------- TX -------------------------
eth0 272.8 M bit/s 51 k pps 350.7 M bit/s 51 k pps
eth1 371.9 M bit/s 55 k pps 293.6 M bit/s 55 k pps
eth2 6.7 M bit/s 1348 pps 3.0 M bit/s 991 pps
eth3 472 bit/s 0 pps 600 bit/s 0 pps
rtstat
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
21007 114060 748 0 5 0 0 0 11 5 0 0 0 0 0 58280 4
22683 112556 827 0 6 0 0 0 5 5 0 0 0 0 0 60841 7
24230 111083 765 0 4 0 0 0 13 4 0 0 0 0 0 66628 7
Around 110 kpps hitting warm cache entries and ~800 new lookups/sec. I was
expecting to see something similar from your system.
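Since rtstat prints per-second decimal counts, the round figures above are just the average of the three sample rows; a quick check (values copied from the rtstat output above):

```shell
# Average the in_hit and tot columns of the three rtstat samples:
# in_hit is the warm-cache hit rate, tot is the new-lookup rate.
hits=$(( (114060 + 112556 + 111083) / 3 ))   # in_hit column
lookups=$(( (748 + 827 + 765) / 3 ))         # tot column
echo "warm-cache hits/sec: $hits"       # ~110 kpps
echo "new route lookups/sec: $lookups"  # ~800
```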
FYI.
cat /proc/net/softnet_stat
9ba490f3 00000000 01281572 00000000 00000000 00000000 00000000 00000000 002562c2
9939268d 00000000 010e42e9 00000000 00000000 00000000 00000000 00000000 0028fe72
Good Night.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-06 22:35 ` Robert Olsson
@ 2005-01-07 16:17 ` Jeremy M. Guthrie
2005-01-07 19:18 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-07 16:17 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 3111 bytes --]
On Thursday 06 January 2005 04:35 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > You mentioned before that " The stats didn't show any numbers so we
> > don't know your load." Was there a command you wanted me to re-run?
>
> rtstat should show the routing/packet load.
>
> From a system like yours: 2*933 MHz PIII in production for tens of
> thousands of users, many filters and full BGP routing. Current (late here)
> use.
>
>
> ifstat2 eth*
> RX -------------------------- TX -------------------------
> eth0 272.8 M bit/s 51 k pps 350.7 M bit/s 51 k pps
> eth1 371.9 M bit/s 55 k pps 293.6 M bit/s 55 k pps
> eth2 6.7 M bit/s 1348 pps 3.0 M bit/s 991 pps
> eth3 472 bit/s 0 pps 600 bit/s 0 pps
Ifstat output:
#kernel
Interface RX Pkts/Rate TX Pkts/Rate RX Data/Rate TX Data/Rate
RX Errs/Drop TX Errs/Drop RX Over/Rate TX Coll/Rate
lo 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
eth0 1 0 1 0 66 0 178 0
0 0 0 0 0 0 0 0
eth2 0 0 30770 0 0 0 13361K 0
0 0 0 0 0 0 0 0
eth3 81019 0 0 0 41740K 0 0 0
0 0 0 0 0 0 0 0
> rtstat
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 21007 114060 748 0 5 0 0 0 11 5 0 0 0 0 0 58280 4
> 22683 112556 827 0 6 0 0 0 5 5 0 0 0 0 0 60841 7
> 24230 111083 765 0 4 0 0 0 13 4 0 0 0 0 0 66628 7
> Around 110 kpps hitting warm cache entries and ~800 new lookups/sec. I was
> expecting to see something similar from your system.
Did my second email w/ the lnstat data not make it?
>
> FYI.
> cat /proc/net/softnet_stat
total dropped time_squeeze throttled fastroute_hit fastroute_success fastroute_defer fastroute_defer_out cpu_collision
> 9ba490f3 00000000 01281572 00000000 00000000 00000000 00000000 00000000
> 002562c2 9939268d 00000000 010e42e9 00000000 00000000 00000000 00000000
> 00000000 0028fe72
Why do these drops not show up in the interface drop?
> Good Night.
>
>
> --ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 16:17 ` Jeremy M. Guthrie
@ 2005-01-07 19:18 ` Robert Olsson
2005-01-07 19:38 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-07 19:18 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> Did my second email w/ the lnstat data not make it?
I see most zeroes
> > cat /proc/net/softnet_stat
> total dropped time_squeeze throttled fastroute_hit fastroute_success fastroute_defer fastroute_defer_out cpu_collision
> > 9ba490f3 00000000 01281572 00000000 00000000 00000000 00000000 00000000
> > 002562c2 9939268d 00000000 010e42e9 00000000 00000000 00000000 00000000
> > 00000000 0028fe72
> Why do these drops not show up in the interface drop?
These are drops in the network stack not in the devices.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 19:18 ` Robert Olsson
@ 2005-01-07 19:38 ` Jeremy M. Guthrie
2005-01-07 20:07 ` Robert Olsson
2005-01-07 20:14 ` Jeremy M. Guthrie
0 siblings, 2 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-07 19:38 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 1043 bytes --]
On Friday 07 January 2005 01:18 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > Did my second email w/ the lnstat data not make it?
> I see most zeroes
Is there a particular duration and count you want me to run for? The capture
should have been 60 one-second snapshots.
> > > cat /proc/net/softnet_stat
> >
> > total dropped time_squeeze throttled fastroute_hit fastroute_success fastroute_defer fastroute_defer_out cpu_collision
> >
> > > 9ba490f3 00000000 01281572 00000000 00000000 00000000 00000000
> > > 00000000 002562c2 9939268d 00000000 010e42e9 00000000 00000000
> > > 00000000 00000000 00000000 0028fe72
> >
> > Why do these drops not show up in the interface drop?
>
> These are drops in the network stack not in the devices.
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 19:38 ` Jeremy M. Guthrie
@ 2005-01-07 20:07 ` Robert Olsson
2005-01-07 20:14 ` Jeremy M. Guthrie
1 sibling, 0 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-07 20:07 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> Is there a particular duration and count you want me to run for?
> The snapshot should have been 60 one-second snapshots.
That should be OK as we would see one GC going and some flows recreated
after this.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 19:38 ` Jeremy M. Guthrie
2005-01-07 20:07 ` Robert Olsson
@ 2005-01-07 20:14 ` Jeremy M. Guthrie
2005-01-07 20:40 ` Robert Olsson
2005-01-07 22:28 ` Jesse Brandeburg
1 sibling, 2 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-07 20:14 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 2638 bytes --]
I just updated the Intel Drivers to the latest on source-forge(5.6.10.1). I
now see lower CPU usage but I am still dropping.
During a 60 second window the machine received 5,110,164 packets and dropped
20461 or roughly 0.4% packet loss.
cat /proc/net/softnet_stat
2f9c9bd0 150dc67c 00c4701b 000d2659 00000000 00000000 00000000 00000000
00097049
00010f9b 00000000 0000003a 00000000 00000000 00000000 00000000 00000000
000002fc
It has been at 150dc67c for a while now. So while I am dropping at the card,
I am not dropping in the stack.
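The two quoted figures check out with a bit of shell arithmetic (numbers copied from the output above; softnet_stat columns are hex):

```shell
# Loss during the 60 s window: drops / packets received = ~0.4%.
awk 'BEGIN { printf "loss: %.2f%%\n", 100 * 20461 / 5110164 }'
# The second softnet_stat column ("dropped") is cumulative since boot;
# the point is that 0x150dc67c has stopped moving, i.e. no new stack drops.
echo "cumulative stack drops (cpu0): $(( 0x150dc67c ))"
```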
mpstat -P ALL 2
Linux 2.6.10 (h-idspr-msn-1) 01/07/05
14:06:45 CPU %user %nice %system %iowait %irq %soft %idle
intr/s
14:06:47 all 0.00 0.00 0.00 0.75 1.74 43.03 54.48
22598.51
14:06:47 0 0.00 0.00 0.00 0.00 2.49 78.61 18.91
17570.65
14:06:47 1 0.00 0.00 0.00 1.49 1.49 7.46 90.05
5023.88
14:06:47 CPU %user %nice %system %iowait %irq %soft %idle
intr/s
14:06:49 all 0.00 0.00 0.00 0.00 1.50 45.75 52.75
19352.00
14:06:49 0 0.00 0.00 0.00 0.00 2.50 83.00 14.50
14467.00
14:06:49 1 0.00 0.00 0.00 0.00 0.50 9.00 90.50
4886.50
14:06:49 CPU %user %nice %system %iowait %irq %soft %idle
intr/s
14:06:51 all 0.00 0.00 0.00 0.00 1.75 42.50 55.75
22482.59
14:06:51 0 0.00 0.00 0.00 0.00 2.49 77.61 19.90
17572.64
14:06:51 1 0.00 0.00 0.50 0.00 1.00 6.97 91.54
4919.90
14:06:51 CPU %user %nice %system %iowait %irq %soft %idle
intr/s
14:06:53 all 0.00 0.00 0.25 0.00 1.99 43.03 54.73
22458.00
14:06:53 0 0.00 0.00 0.00 0.00 3.00 78.50 18.50
17456.00
14:06:53 1 0.00 0.00 0.00 0.00 0.50 8.00 91.00
4992.50
14:06:53 CPU %user %nice %system %iowait %irq %soft %idle
intr/s
14:06:55 all 0.00 0.00 0.00 0.00 1.75 42.75 55.50
22854.00
14:06:55 0 0.00 0.00 0.00 0.00 3.00 77.00 20.00
17838.00
14:06:55 1 0.00 0.00 0.00 0.00 1.00 8.50 91.00
5012.50
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 20:14 ` Jeremy M. Guthrie
@ 2005-01-07 20:40 ` Robert Olsson
2005-01-07 21:06 ` Jeremy M. Guthrie
2005-01-07 22:28 ` Jesse Brandeburg
1 sibling, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-07 20:40 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> During a 60 second window the machine received 5,110,164 packets and
> dropped 20461 or roughly 0.4% packet loss.
Around 85 kpps. If you run rtstat we could get a feeling for how many
slow-path lookups are taken. Or save /proc/net/stat/rt_cache before your
60 sec run.
mpstat I don't trust in this context.
> It has been at 150dc67c for a while now. So while I am dropping at the
> card, I am not dropping in the stack.
You use NAPI driver then...
Check if the patch below is in your e1000 driver.
--ro
--- drivers/net/e1000/e1000_main.c.orig 2004-02-16 14:46:16.000000000 +0100
+++ drivers/net/e1000/e1000_main.c 2004-02-16 15:45:05.000000000 +0100
@@ -2161,19 +2161,21 @@
struct e1000_adapter *adapter = netdev->priv;
int work_to_do = min(*budget, netdev->quota);
int work_done = 0;
-
- e1000_clean_tx_irq(adapter);
+ static boolean_t tx_cleaned;
+
+ tx_cleaned = e1000_clean_tx_irq(adapter);
e1000_clean_rx_irq(adapter, &work_done, work_to_do);
*budget -= work_done;
netdev->quota -= work_done;
- if(work_done < work_to_do || !netif_running(netdev)) {
+ if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
netif_rx_complete(netdev);
e1000_irq_enable(adapter);
+ return 0;
}
- return (work_done >= work_to_do);
+ return 1;
}
#endif
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 20:40 ` Robert Olsson
@ 2005-01-07 21:06 ` Jeremy M. Guthrie
2005-01-07 21:30 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-07 21:06 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 3962 bytes --]
On Friday 07 January 2005 02:40 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > During a 60 second window the machine received 5,110,164 packets and
> > dropped 20461 or roughly 0.4% packet loss.
>
> Around 85 kpps. If you run rtstat we could get a feeling for how many
> slow-path lookups are taken.
I can't run rtstat. The rt_cache_stat file doesn't exist in /proc/net
or /proc/net/stat in either V2.4 or V2.6.
> Or save the /proc/net/stat/rt_cache before you 60 sec
> run.
lnstat isn't giving me anything other than zeros.
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000f91e 15b1ddab 0b69a32c 00000000 00000000 00001b0a 00001fbd 00000000
00066f65 0000b943 00000000 087b80a1 0878e81a 00000256 00000000 ca08c77d
0020e549
0000f91e 00005c59 0000b3ef 00000000 00000000 00000cf7 00000000 00000000
0000000f 0000004c 00000002 00000fb9 00000fb5 00000000 00000000 0004e8d9
000000b2
60 seconds......
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000fb3e 15ff4696 0b6bbb65 00000000 00000000 00001b10 00001fc1 00000000
00067111 0000b974 00000000 087d9903 087b0003 00000256 00000000 cb58b399
0020ee79
0000fb3e 00005c5b 0000b40d 00000000 00000000 00000cf9 00000000 00000000
0000000f 0000004c 00000002 00000fbb 00000fb7 00000000 00000000 0004e9ba
000000b2
> mpstat I don't trust in this context.
>
> > It has been at 150dc67c for a while now. So while I am dropping at the
> > card, I am not dropping in the stack.
>
> You use NAPI driver then...
> Check if the patch below is in your e1000 driver.
> --ro
The drivers should be built as NAPI.
Here is the snippet:
static int
e1000_clean(struct net_device *netdev, int *budget)
{
	struct e1000_adapter *adapter = netdev->priv;
	int work_to_do = min(*budget, netdev->quota);
	int tx_cleaned;
	int work_done = 0;

	if (!netif_carrier_ok(netdev))
		goto quit_polling;

	tx_cleaned = e1000_clean_tx_irq(adapter);
	e1000_clean_rx_irq(adapter, &work_done, work_to_do);

	*budget -= work_done;
	netdev->quota -= work_done;

	/* if no Rx and Tx cleanup work was done, exit the polling mode */
	if(!tx_cleaned || (work_done < work_to_do) ||
	   !netif_running(netdev)) {
quit_polling:	netif_rx_complete(netdev);
		e1000_irq_enable(adapter);
		return 0;
	}

	return (work_done >= work_to_do);
}
>
> --- drivers/net/e1000/e1000_main.c.orig 2004-02-16 14:46:16.000000000 +0100
> +++ drivers/net/e1000/e1000_main.c 2004-02-16 15:45:05.000000000 +0100
> @@ -2161,19 +2161,21 @@
> struct e1000_adapter *adapter = netdev->priv;
> int work_to_do = min(*budget, netdev->quota);
> int work_done = 0;
> -
> - e1000_clean_tx_irq(adapter);
> + static boolean_t tx_cleaned;
> +
> + tx_cleaned = e1000_clean_tx_irq(adapter);
> e1000_clean_rx_irq(adapter, &work_done, work_to_do);
>
> *budget -= work_done;
> netdev->quota -= work_done;
>
> - if(work_done < work_to_do || !netif_running(netdev)) {
> + if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
> netif_rx_complete(netdev);
> e1000_irq_enable(adapter);
> + return 0;
> }
>
> - return (work_done >= work_to_do);
> + return 1;
> }
> #endif
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 21:06 ` Jeremy M. Guthrie
@ 2005-01-07 21:30 ` Robert Olsson
2005-01-11 15:11 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-07 21:30 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
Jeremy M. Guthrie writes:
> lnstat isn't giving me anything other than zeros.
Crap.
> entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
> out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
> gc_dst_overflow in_hlist_search out_hlist_search
> 0000f91e 15b1ddab 0b69a32c 00000000 00000000 00001b0a 00001fbd 00000000
> 00066f65 0000b943 00000000 087b80a1 0878e81a 00000256 00000000 ca08c77d
> 0020e549
> 0000fb3e 15ff4696 0b6bbb65 00000000 00000000 00001b10 00001fc1 00000000
> 00067111 0000b974 00000000 087d9903 087b0003 00000256 00000000 cb58b399
> 0020ee79
85 kpps and 2287 lookups/sec as a 60 sec average. Pretty nice load yes.
Where did you get the load?
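The figures fall out of differencing the hex rt_cache counters across the 60-second window (a sketch, using the two snapshot rows quoted above):

```shell
# Delta of the in_hit and in_slow_tot counters between the two
# /proc/net/stat/rt_cache snapshots taken 60 s apart (hex values).
hit=$(( 0x15ff4696 - 0x15b1ddab ))    # warm-cache hits in the window
slow=$(( 0x0b6bbb65 - 0x0b69a32c ))   # new (slow-path) lookups
echo "pkts/sec:    $(( (hit + slow) / 60 ))"  # ~85 kpps
echo "lookups/sec: $(( slow / 60 ))"          # the 2287 figure
```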
Try to see if you can fix lnstat :-) it would be nice to see the route dynamics.
Or try the rtstat I pointed to.
And apply the e1000 patch I sent and make a test run.
I'll have a beer and give up for tonight...
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 20:14 ` Jeremy M. Guthrie
2005-01-07 20:40 ` Robert Olsson
@ 2005-01-07 22:28 ` Jesse Brandeburg
2005-01-07 22:50 ` Jeremy M. Guthrie
1 sibling, 1 reply; 88+ messages in thread
From: Jesse Brandeburg @ 2005-01-07 22:28 UTC (permalink / raw)
To: Jeremy M. Guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger
On Fri, 7 Jan 2005, Jeremy M. Guthrie wrote:
> I just updated the Intel Drivers to the latest on source-forge(5.6.10.1). I
> now see lower CPU usage but I am still dropping.
>
> During a 60 second window the machine received 5,110,164 packets and dropped
> 20461 or roughly 0.4% packet loss.
>
NAPI will cause a very busy stack to force the network card to drop the
data instead of the stack. This is supposed to be a good thing because
the hardware will be forced to drop the packet (hopefully) instead of
interrupt rate thrashing your machine right when it needs the cpu to do
other stuff. This is supposed to be the saving grace of NAPI.
So, not to distract from the conversation, but in the interest (my
interest :-) ) of tuning your E1000, can you please send the output of
ethtool -S ethX for each interface? This will help us figure out if there
is anything to tune in the driver (like number of rx buffers, etc)
Thanks,
Jesse
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 22:28 ` Jesse Brandeburg
@ 2005-01-07 22:50 ` Jeremy M. Guthrie
2005-01-07 22:57 ` Stephen Hemminger
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-07 22:50 UTC (permalink / raw)
To: netdev; +Cc: Jesse Brandeburg, Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 3369 bytes --]
On Friday 07 January 2005 04:28 pm, Jesse Brandeburg wrote:
> On Fri, 7 Jan 2005, Jeremy M. Guthrie wrote:
> > I just updated the Intel Drivers to the latest on source-forge(5.6.10.1).
> > I now see lower CPU usage but I am still dropping.
> >
> > During a 60 second window the machine received 5,110,164 packets and
> > dropped 20461 or roughly 0.4% packet loss.
>
> NAPI will cause a very busy stack to force the network card to drop the
> data instead of the stack. This is supposed to be a good thing because
> the hardware will be forced to drop the packet (hopefully) instead of
> interrupt rate thrashing your machine right when it needs the cpu to do
> other stuff. This is supposed to be the saving grace of NAPI.
Makes sense.
> So, not to distract from the conversation, but in the interest (my
> interest :-) ) of tuning your E1000, can you please send the output of
> ethtool -S ethX for each interface? This will help us figure out if there
> is anything to tune in the driver (like number of rx buffers, etc)
ethtool -S eth2
NIC statistics:
rx_packets: 0
tx_packets: 314550698
rx_bytes: 0
tx_bytes: 4290523139
rx_errors: 0
tx_errors: 0
rx_dropped: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 0
rx_csum_offload_good: 0
rx_csum_offload_errors: 0
ethtool -S eth3
NIC statistics:
rx_packets: 719847127
tx_packets: 5
rx_bytes: 1880301945
tx_bytes: 398
rx_errors: 3368295
tx_errors: 0
rx_dropped: 1478044
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 1890251
rx_missed_errors: 1890251
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 379837423993
rx_csum_offload_good: 672891990
rx_csum_offload_errors: 178666
> Thanks,
> Jesse
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 22:50 ` Jeremy M. Guthrie
@ 2005-01-07 22:57 ` Stephen Hemminger
2005-01-11 15:17 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Stephen Hemminger @ 2005-01-07 22:57 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Jesse Brandeburg, Robert Olsson
On Fri, 7 Jan 2005 16:50:51 -0600
"Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> On Friday 07 January 2005 04:28 pm, Jesse Brandeburg wrote:
> > On Fri, 7 Jan 2005, Jeremy M. Guthrie wrote:
> > > I just updated the Intel Drivers to the latest on source-forge(5.6.10.1).
> > > I now see lower CPU usage but I am still dropping.
> > >
> > > During a 60 second window the machine received 5,110,164 packets and
> > > dropped 20461 or roughly 0.4% packet loss.
> >
> > NAPI will cause a very busy stack to force the network card to drop the
> > data instead of the stack. This is supposed to be a good thing because
> > the hardware will be forced to drop the packet (hopefully) instead of
> > interrupt rate thrashing your machine right when it needs the cpu to do
> > other stuff. This is supposed to be the saving grace of NAPI.
> Makes sense.
Not sure if NAPI makes sense on transmit because it causes the transmit
ring to grow and freeing the transmit skb should be quick. Perhaps
other interrupt moderation schemes work better.
On receive the processing could take longer and NAPI is a real win.
* Re: V2.4 policy router operates faster/better than V2.6
[not found] <200501071619.54566.jeremy.guthrie@berbee.com>
@ 2005-01-07 23:23 ` Jesse Brandeburg
2005-01-10 21:11 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Jesse Brandeburg @ 2005-01-07 23:23 UTC (permalink / raw)
To: Jeremy M. Guthrie; +Cc: Brandeburg, Jesse, netdev, Robert.Olsson, shemminger
On Fri, 7 Jan 2005, Jeremy M. Guthrie wrote:
> > Very interesting that with your workload the napi driver doesn't keep up
> > (from looking at your posts in netdev) and yet the interrupt mode driver
> > can.
> Well, I need to do more digging. One scenario the interrupt mode can hand
> stuff off to the CPU but the network stack still drops. The other scenario,
> the card drops.
>
> ethtool -S eth2
> NIC statistics:
<snip!>
> tx_packets: 314550698
> tx_bytes: 4290523139
> ethtool -S eth3
> NIC statistics:
<snip!>
> rx_packets: 719847127
> tx_packets: 5
> rx_bytes: 1880301945
> tx_bytes: 398
> rx_errors: 3368295
> rx_dropped: 1478044
> rx_fifo_errors: 1890251
> rx_missed_errors: 1890251
> rx_long_byte_count: 379837423993
> rx_csum_offload_good: 672891990
> rx_csum_offload_errors: 178666
Okay, well, rx_dropped means "receive no buffers count" in our driver, but
doesn't necessarily mean that the packet was completely dropped; it just
means that the fifo may have had to queue up the packet on the adapter as
best it could and wait for the OS to provide more skb's. This may or may
not lead to further errors...
Now, the rx_fifo errors is from hardware counter "missed packet count"
which means that a packet was dropped because the fifo was full (probably
indicated at least some of the time because the card was in "receive no
buffers" state) btw fifo errors and rx_missed are both being fed by the
same hardware counter.
rx_csum_offload_errors is somewhat worrisome because it means that you're
receiving packets that appear to be corrupted. This could be from any
number of sources, but is most likely a misconfigured station on your lan
or something is corrupting checksums (a tcpdump would help debug here, but
would really slow things down). The packets are NOT dropped, but handed to
the stack to decide what to do.
So, to mitigate the rnbc "receive no buffers count" a little you can
provide some more buffering on the receive side by loading the module with
RxDescriptors=2048 or something larger than the default of 256. This will
help (if you haven't already) but will also probably increase the amount
of work your system has to do, as more packets will be able to be stored
up.
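The suggestion above translates to a module reload along these lines (a hedged sketch: the e1000 module of that era took one comma-separated value per installed port; run as root, and expect both links to bounce while the module reloads):

```shell
# Reload e1000 with a deeper RX ring (default was 256 descriptors).
# One value per port; both ports here get 2048 RX descriptors.
rmmod e1000
modprobe e1000 RxDescriptors=2048,2048
# Confirm the new ring geometry afterwards:
ethtool -g eth3
```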
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 23:23 ` Jesse Brandeburg
@ 2005-01-10 21:11 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-10 21:11 UTC (permalink / raw)
To: netdev; +Cc: Jesse Brandeburg, Robert.Olsson, shemminger
[-- Attachment #1: Type: text/plain, Size: 6206 bytes --]
I rebound the card with 2048 RxDescriptors. There appears to be a burst of
errors right when the port comes online. Down below is an ifconfig ; sleep
60 ; ifconfig.
I'll be testing Robert's patch shortly.
ethtool -S eth2
NIC statistics:
rx_packets: 0
tx_packets: 7295976
rx_bytes: 0
tx_bytes: 3413882143
rx_errors: 0
tx_errors: 0
rx_dropped: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 0
rx_csum_offload_good: 0
rx_csum_offload_errors: 0
ethtool -S eth3
NIC statistics:
rx_packets: 19231078
tx_packets: 5
rx_bytes: 1350479823
tx_bytes: 398
rx_errors: 522159
tx_errors: 0
rx_dropped: 320198
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 201961
rx_missed_errors: 201961
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 9940414415
rx_csum_offload_good: 17879204
rx_csum_offload_errors: 8537
Mon Jan 10 15:06:26 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:67183383 errors:583215 dropped:583215 overruns:247258
frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1120957705 (1069.0 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
Mon Jan 10 15:07:26 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:73275567 errors:616520 dropped:616520 overruns:262517
frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:337437991 (321.8 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
On Friday 07 January 2005 05:23 pm, Jesse Brandeburg wrote:
> On Fri, 7 Jan 2005, Jeremy M. Guthrie wrote:
> > > Very interesting that with your workload the napi driver doesn't keep
> > > up (from looking at your posts in netdev) and yet the interrupt mode
> > > driver can.
> >
> > Well, I need to do more digging. One scenario the interrupt mode can
> > hand stuff off to the CPU but the network stack still drops. The other
> > scenario, the card drops.
> >
> > ethtool -S eth2
> > NIC statistics:
>
> <snip!>
>
> > tx_packets: 314550698
> > tx_bytes: 4290523139
> > ethtool -S eth3
> > NIC statistics:
>
> <snip!>
>
> > rx_packets: 719847127
> > tx_packets: 5
> > rx_bytes: 1880301945
> > tx_bytes: 398
> > rx_errors: 3368295
> > rx_dropped: 1478044
> > rx_fifo_errors: 1890251
> > rx_missed_errors: 1890251
> > rx_long_byte_count: 379837423993
> > rx_csum_offload_good: 672891990
> > rx_csum_offload_errors: 178666
>
> Okay, well, rx_dropped means "receive no buffers count" in our driver, but
> doesn't necessarily mean that the packet was completely dropped it just
> means that the fifo may have had to queue up the packet on the adapter as
> best it could and wait for the OS to provide more skb's, this may or may
> not lead to further errors...
>
> Now, the rx_fifo errors is from hardware counter "missed packet count"
> which means that a packet was dropped because the fifo was full (probably
> indicated at least some of the time because the card was in "receive no
> buffers" state) btw fifo errors and rx_missed are both being fed by the
> same hardware counter.
>
> rx_csum_offload_errors is somewhat worrisome because it means that you're
> receiving packets that appear to be corrupted. This could be from any
> number of sources, but is most likely a misconfigured station on your lan
> or something is corrupting checksums (a tcpdump would help debug here, but
> would really slow things down) The packets are NOT dropped, but handed to
> the stack to decide what to do.
>
> So, to mitigate the rnbc "receive no buffers count" a little you can
> provide some more buffering on the receive side by loading the module with
> RxDescriptors=2048 or something larger than the default of 256. this will
> help (if you haven't already) but will also probably increase the amount
> of work your system has to do, as more packets will be able to be stored
> up.
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 21:30 ` Robert Olsson
@ 2005-01-11 15:11 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-11 15:11 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger
[-- Attachment #1: Type: text/plain, Size: 799 bytes --]
On Friday 07 January 2005 03:30 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> 85 kpps and 2287 lookups/sec as a 60 sec average. Pretty nice load yes.
> Where did you get the load?
> Try see if you fix lnstat :-) it would be nice to see the route dynamics.
> Or use the try the rtstat I pointed to.
>
> And apply the e1000 patch I sent and make a test run.
The code Jesse Brandeburg had me download has the patch in it.
> I'll have a beer and give up for tonight...
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-07 22:57 ` Stephen Hemminger
@ 2005-01-11 15:17 ` Jeremy M. Guthrie
2005-01-11 16:40 ` Robert Olsson
2005-01-11 17:17 ` Jeremy M. Guthrie
0 siblings, 2 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-11 15:17 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Jesse Brandeburg, Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 3052 bytes --]
date ; ifconfig eth3 ; cat /proc/net/softnet_stat ;
cat /proc/net/stat/rt_cache ; sleep 60 ; date ; ifconfig eth3 ;
cat /proc/net/softnet_stat ; cat /proc/net/stat/rt_cache
Tue Jan 11 09:12:21 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3519452697 errors:5558592 dropped:5558592
overruns:4011523 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:497775695 (474.7 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
f59427dc 150dc67c 00c562ab 000d2659 00000000 00000000 00000000 00000000
00547b5b
00038622 00000000 00000065 00000000 00000000 00000000 00000000 00000000
0006804f
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000f3c2 aeca1565 272c383d 00000000 00000000 000089a8 00005298 00000002
001db403 0003054f 00000000 230aa73c 22fe33eb 00000cf0 00000000 b37011dc
00957c14
0000f3c2 0000b975 00029703 00000000 00000000 000035a5 00000000 00000000
00000015 00000083 00000002 000038c8 000038a3 00000000 00000000 0012a566
0000014d
Tue Jan 11 09:13:21 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3524302561 errors:5571396 dropped:5571396
overruns:4022383 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3065327250 (2923.3 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
f5de26f5 150dc67c 00c562b4 000d2659 00000000 00000000 00000000 00000000
00547ffd
00038632 00000000 00000065 00000000 00000000 00000000 00000000 00000000
0006807c
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000fc1e af11f07b 272e58d3 00000000 00000000 000089a8 00005298 00000002
001db571 00030572 00000000 230cc7f1 23005428 00000cf0 00000000 b4b5b529
009583b9
0000fc1e 0000b977 00029710 00000000 00000000 000035a7 00000000 00000000
00000015 00000083 00000002 000038ca 000038a5 00000000 00000000 0012a5c8
0000014d
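The two snapshots above are 60 seconds apart, so the forwarding and drop rates fall out as simple counter deltas. A minimal sketch, with the RX packet and drop counters copied from the `ifconfig` output above:

```python
def rates(rx0, rx1, drop0, drop1, interval=60):
    """Per-second RX and drop rates from two counter snapshots."""
    return (rx1 - rx0) / interval, (drop1 - drop0) / interval

# eth3 RX packets and drops at 09:12:21 and 09:13:21:
pps, dps = rates(3519452697, 3524302561, 5558592, 5571396)
print(f"{pps:.0f} pkt/s forwarded, {dps:.0f} pkt/s dropped")
# → 80831 pkt/s and 213 pkt/s, matching the "80 kpps ... ~200 pps"
#   figures quoted later in the thread.
```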
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-11 15:17 ` Jeremy M. Guthrie
@ 2005-01-11 16:40 ` Robert Olsson
2005-01-12 1:27 ` Jeremy M. Guthrie
2005-01-11 17:17 ` Jeremy M. Guthrie
1 sibling, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-11 16:40 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Stephen Hemminger, Jesse Brandeburg, Robert Olsson
About the same load as last time: 80 kpps forwarded, a drop rate of ~200 pps,
and 2323 fib lookups/sec. Not so bad for a PIII, but it may be tunable further.
Did the e1000 patch cure the problem where interrupts got enabled for
unidirectional traffic? Ring size? I've never seen any win in system
performance with RX rings larger than 256, at least not in the lab.
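Robert's figures can be reproduced from the hex counters in the quoted `/proc/net/stat/rt_cache` snapshots, where `in_hit` and `in_slow_tot` are the second and third columns; taking deltas over the 60-second window:

```python
# rt_cache counters are printed in hex; values copied from the two snapshots.
in_hit0, in_hit1 = int("aeca1565", 16), int("af11f07b", 16)
in_slow0, in_slow1 = int("272c383d", 16), int("272e58d3", 16)

hits_per_sec = (in_hit1 - in_hit0) // 60    # route-cache hits
slow_per_sec = (in_slow1 - in_slow0) // 60  # slow-path (fib) lookups

print(hits_per_sec + slow_per_sec, "pkt/s total;", slow_per_sec, "fib lookups/s")
# → about 80.8 kpps total, of which 2323/s miss the cache and hit the fib.
```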
--ro
Jeremy M. Guthrie writes:
> date ; ifconfig eth3 ; cat /proc/net/softnet_stat ;
> cat /proc/net/stat/rt_cache ; sleep 60 ; date ; ifconfig eth3 ;
> cat /proc/net/softnet_stat ; cat /proc/net/stat/rt_cache
>
> Tue Jan 11 09:12:21 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:3519452697 errors:5558592 dropped:5558592
> overruns:4011523 frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:497775695 (474.7 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> f59427dc 150dc67c 00c562ab 000d2659 00000000 00000000 00000000 00000000
> 00547b5b
> 00038622 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> 0006804f
> entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
> out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
> gc_dst_overflow in_hlist_search out_hlist_search
> 0000f3c2 aeca1565 272c383d 00000000 00000000 000089a8 00005298 00000002
> 001db403 0003054f 00000000 230aa73c 22fe33eb 00000cf0 00000000 b37011dc
> 00957c14
> 0000f3c2 0000b975 00029703 00000000 00000000 000035a5 00000000 00000000
> 00000015 00000083 00000002 000038c8 000038a3 00000000 00000000 0012a566
> 0000014d
>
> Tue Jan 11 09:13:21 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:3524302561 errors:5571396 dropped:5571396
> overruns:4022383 frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:3065327250 (2923.3 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> f5de26f5 150dc67c 00c562b4 000d2659 00000000 00000000 00000000 00000000
> 00547ffd
> 00038632 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> 0006807c
> entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
> out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
> gc_dst_overflow in_hlist_search out_hlist_search
> 0000fc1e af11f07b 272e58d3 00000000 00000000 000089a8 00005298 00000002
> 001db571 00030572 00000000 230cc7f1 23005428 00000cf0 00000000 b4b5b529
> 009583b9
> 0000fc1e 0000b977 00029710 00000000 00000000 000035a7 00000000 00000000
> 00000015 00000083 00000002 000038ca 000038a5 00000000 00000000 0012a5c8
> 0000014d
>
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-11 15:17 ` Jeremy M. Guthrie
2005-01-11 16:40 ` Robert Olsson
@ 2005-01-11 17:17 ` Jeremy M. Guthrie
2005-01-11 18:46 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-11 17:17 UTC (permalink / raw)
To: netdev; +Cc: Stephen Hemminger, Jesse Brandeburg, Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 8723 bytes --]
./rtstat
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
63375 86092 2844 0 0 0 0 0 5 1
1610369017 2846 2844 0 0 386039 17
64870 84431 537031471 537492192 0 0 0 0 1 0
0 2848 2846 0 0 386130 10
58707 87889 2715 0 0 0 0 0 7 1 0
2716 2714 0 0 384782 13
60617 89993 2796 0 0 0 0 0 5 0 0
2795 2793 0 0 384432 17
62458 93221 2772 0 0 0 0 0 3 0 0
2772 2770 0 0 399579 9
63990 93792 2712 0 0 0 0 0 6 0 0
2713 2711 0 0 422585 12
65499 94235 2846 0 0 0 0 0 2 0 0
2846 2844 0 0 424341 9
59755 90992 2864 0 0 0 0 0 1 1 0
2865 2863 0 0 385888 13
61702 87681 2798 0 0 0 0 0 6 2 0
2800 2798 0 0 383276 34
63334 90557 2734 0 0 0 0 0 5 1 0
2735 2733 0 0 422121 28
64798 88178 2732 0 0 0 0 0 1 0 0
2732 2730 0 0 410772 11
58574 85910 2669 0 0 0 0 0 3 0 0
2670 2668 0 0 393331 10
60538 87238 2705 0 0 0 0 0 4 0 0
2705 2703 0 0 382161 20
62164 85738 2628 0 0 0 0 0 4 0 0
2628 2626 0 0 373049 25
63759 78752 2638 0 0 0 0 0 2 0 0
2638 2636 0 0 348411 5
65052 79832 2482 0 0 0 0 0 3 0 0
2481 2479 0 0 365871 20
58986 82611 2549 0 0 0 0 0 4 1 0
2550 2548 0 0 366429 24
60845 85340 2620 0 0 1 0 0 2 0 0
2620 2618 0 0 369963 16
62465 83122 2591 0 0 0 0 0 5 0 0
2591 2589 0 0 371261 18
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
63998 86141 2640 0 0 0 0 0 4 0
537114631 2640 2638 0 0 386452 15
65294 85989 2527 0 0 0 0 0 5 1
1610369017 2528 2526 0 0 379801 26
59241 84305 2662 0 0 0 0 0 7 0 0
2662 2660 0 0 371925 35
61194 85397 2837 0 0 0 0 0 3 0 0
2837 2835 0 0 374788 17
62908 88133 2688 0 0 1 0 0 4 1 0
2689 2687 0 0 382411 19
64329 88495 2648 0 0 0 0 0 5 1 0
2649 2647 0 0 398763 30
65560 84994 2447 0 0 1 0 0 6 2 0
2449 2447 0 0 394745 33
59677 81478 2568 0 0 0 0 0 5 0 0
2569 2567 0 0 341439 22
61459 81225 2623 0 0 0 0 0 5 0 0
2623 2621 0 0 345996 30
63038 83660 2544 0 0 0 0 0 3 0 0
2544 2542 0 0 381567 21
64494 86321 2620 0 0 0 0 0 4 0 0
2620 2618 0 0 395278 19
65528 88754 2766 0 0 0 0 0 6 0 0
2765 2763 0 0 421075 34
59948 86443 2990 0 0 0 0 0 5 0 0
2990 2988 0 0 373605 29
61860 81912 2786 0 0 0 0 0 5 1 0
2787 2785 0 0 354678 34
63630 80399 2981 0 0 0 0 0 4 0 0
2979 2977 0 0 354983 32
65215 83600 2934 0 0 0 0 0 6 3 0
2937 2935 0 0 371497 41
59322 85583 2821 0 0 0 0 0 7 0 0
2822 2820 0 0 382896 40
61268 82602 2817 0 0 0 0 0 9 2 0
2819 2817 0 0 368078 54
62989 85336 2760 0 0 1 0 0 9 0 0
2760 2758 0 0 394212 39
64387 84861 2603 0 0 0 0 0 7 0 0
2602 2600 0 0 390852 36
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
65681 82848 2625 0 0 0 0 0 8 0
537114631 2625 2623 0 0 370769 49
59923 82826 2685 0 0 0 0 0 6 0
1610369017 2685 2683 0 0 344094 41
61700 82705 2706 0 0 0 0 0 7 0 0
2706 2704 0 0 368474 39
63247 91042 2592 0 0 0 0 0 9 1 0
2593 2591 0 0 399393 57
64636 93050 2530 0 0 0 0 0 6 0 0
2530 2528 0 0 405903 24
58301 90662 2554 0 0 0 0 0 5 0 0
2555 2553 0 0 404296 39
60146 88405 2567 0 0 0 0 0 8 1 0
2567 2565 0 0 370227 49
61822 86417 2579 0 0 0 0 0 7 1 0
2580 2578 0 0 366323 43
63351 86517 2594 0 0 1 0 0 9 0 0
2594 2592 0 0 363340 51
64739 87879 2669 0 0 0 0 0 3 0 0
2670 2668 0 0 387580 26
58397 88297 2566 0 0 0 0 0 4 0 0
2566 2564 0 0 396980 34
60352 85039 2707 0 0 0 0 0 6 1 0
2709 2707 0 0 367790 47
62169 87083 2779 0 0 0 0 0 6 0 0
2775 2773 0 0 377163 31
63808 88700 2752 0 0 0 0 0 4 0 0
2751 2749 0 0 392830 28
65189 91242 2725 0 0 0 0 0 5 1 0
2727 2725 0 0 416871 31
59326 84893 2834 0 0 0 0 0 5 1 0
2828 2826 0 0 380811 28
61064 87351 2553 0 0 0 0 0 9 0 0
2552 2550 0 0 382053 37
62637 88761 2551 0 0 0 0 0 5 1 0
2553 2551 0 0 384974 29
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-11 17:17 ` Jeremy M. Guthrie
@ 2005-01-11 18:46 ` Robert Olsson
2005-01-12 1:30 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-11 18:46 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Stephen Hemminger, Jesse Brandeburg, Robert Olsson
Yes, the rtstat output matches what we estimated. How about the RX ring size
and the e1000 fix? Do we give up here, or do you want to try another setting
for the route hash? That's beyond the scope of this subject, and you would
have to reboot to change it.
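The route-hash setting Robert alludes to is presumably the `rhash_entries=` kernel boot parameter, which sizes the IPv4 route cache hash table at boot, hence the reboot. A hypothetical GRUB-style sketch; the value 65536 is purely illustrative:

```shell
# Kernel command line: size the route cache hash table explicitly.
# rhash_entries is a documented 2.6 boot parameter; the right value
# depends on RAM and workload.
kernel /boot/vmlinuz-2.6.10 root=/dev/sda1 rhash_entries=65536

# After boot, the chosen size is reported early in the kernel log:
dmesg | grep "IP route cache"
```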
--ro
Jeremy M. Guthrie writes:
> ./rtstat
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
> GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 63375 86092 2844 0 0 0 0 0 5 1
> 1610369017 2846 2844 0 0 386039 17
> 64870 84431 537031471 537492192 0 0 0 0 1 0
> 0 2848 2846 0 0 386130 10
> 58707 87889 2715 0 0 0 0 0 7 1 0
> 2716 2714 0 0 384782 13
> 60617 89993 2796 0 0 0 0 0 5 0 0
> 2795 2793 0 0 384432 17
> 62458 93221 2772 0 0 0 0 0 3 0 0
> 2772 2770 0 0 399579 9
> 63990 93792 2712 0 0 0 0 0 6 0 0
> 2713 2711 0 0 422585 12
> 65499 94235 2846 0 0 0 0 0 2 0 0
> 2846 2844 0 0 424341 9
> 59755 90992 2864 0 0 0 0 0 1 1 0
> 2865 2863 0 0 385888 13
> 61702 87681 2798 0 0 0 0 0 6 2 0
> 2800 2798 0 0 383276 34
> 63334 90557 2734 0 0 0 0 0 5 1 0
> 2735 2733 0 0 422121 28
> 64798 88178 2732 0 0 0 0 0 1 0 0
> 2732 2730 0 0 410772 11
> 58574 85910 2669 0 0 0 0 0 3 0 0
> 2670 2668 0 0 393331 10
> 60538 87238 2705 0 0 0 0 0 4 0 0
> 2705 2703 0 0 382161 20
> 62164 85738 2628 0 0 0 0 0 4 0 0
> 2628 2626 0 0 373049 25
> 63759 78752 2638 0 0 0 0 0 2 0 0
> 2638 2636 0 0 348411 5
> 65052 79832 2482 0 0 0 0 0 3 0 0
> 2481 2479 0 0 365871 20
> 58986 82611 2549 0 0 0 0 0 4 1 0
> 2550 2548 0 0 366429 24
> 60845 85340 2620 0 0 1 0 0 2 0 0
> 2620 2618 0 0 369963 16
> 62465 83122 2591 0 0 0 0 0 5 0 0
> 2591 2589 0 0 371261 18
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
> GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 63998 86141 2640 0 0 0 0 0 4 0
> 537114631 2640 2638 0 0 386452 15
> 65294 85989 2527 0 0 0 0 0 5 1
> 1610369017 2528 2526 0 0 379801 26
> 59241 84305 2662 0 0 0 0 0 7 0 0
> 2662 2660 0 0 371925 35
> 61194 85397 2837 0 0 0 0 0 3 0 0
> 2837 2835 0 0 374788 17
> 62908 88133 2688 0 0 1 0 0 4 1 0
> 2689 2687 0 0 382411 19
> 64329 88495 2648 0 0 0 0 0 5 1 0
> 2649 2647 0 0 398763 30
> 65560 84994 2447 0 0 1 0 0 6 2 0
> 2449 2447 0 0 394745 33
> 59677 81478 2568 0 0 0 0 0 5 0 0
> 2569 2567 0 0 341439 22
> 61459 81225 2623 0 0 0 0 0 5 0 0
> 2623 2621 0 0 345996 30
> 63038 83660 2544 0 0 0 0 0 3 0 0
> 2544 2542 0 0 381567 21
> 64494 86321 2620 0 0 0 0 0 4 0 0
> 2620 2618 0 0 395278 19
> 65528 88754 2766 0 0 0 0 0 6 0 0
> 2765 2763 0 0 421075 34
> 59948 86443 2990 0 0 0 0 0 5 0 0
> 2990 2988 0 0 373605 29
> 61860 81912 2786 0 0 0 0 0 5 1 0
> 2787 2785 0 0 354678 34
> 63630 80399 2981 0 0 0 0 0 4 0 0
> 2979 2977 0 0 354983 32
> 65215 83600 2934 0 0 0 0 0 6 3 0
> 2937 2935 0 0 371497 41
> 59322 85583 2821 0 0 0 0 0 7 0 0
> 2822 2820 0 0 382896 40
> 61268 82602 2817 0 0 0 0 0 9 2 0
> 2819 2817 0 0 368078 54
> 62989 85336 2760 0 0 1 0 0 9 0 0
> 2760 2758 0 0 394212 39
> 64387 84861 2603 0 0 0 0 0 7 0 0
> 2602 2600 0 0 390852 36
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
> GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 65681 82848 2625 0 0 0 0 0 8 0
> 537114631 2625 2623 0 0 370769 49
> 59923 82826 2685 0 0 0 0 0 6 0
> 1610369017 2685 2683 0 0 344094 41
> 61700 82705 2706 0 0 0 0 0 7 0 0
> 2706 2704 0 0 368474 39
> 63247 91042 2592 0 0 0 0 0 9 1 0
> 2593 2591 0 0 399393 57
> 64636 93050 2530 0 0 0 0 0 6 0 0
> 2530 2528 0 0 405903 24
> 58301 90662 2554 0 0 0 0 0 5 0 0
> 2555 2553 0 0 404296 39
> 60146 88405 2567 0 0 0 0 0 8 1 0
> 2567 2565 0 0 370227 49
> 61822 86417 2579 0 0 0 0 0 7 1 0
> 2580 2578 0 0 366323 43
> 63351 86517 2594 0 0 1 0 0 9 0 0
> 2594 2592 0 0 363340 51
> 64739 87879 2669 0 0 0 0 0 3 0 0
> 2670 2668 0 0 387580 26
> 58397 88297 2566 0 0 0 0 0 4 0 0
> 2566 2564 0 0 396980 34
> 60352 85039 2707 0 0 0 0 0 6 1 0
> 2709 2707 0 0 367790 47
> 62169 87083 2779 0 0 0 0 0 6 0 0
> 2775 2773 0 0 377163 31
> 63808 88700 2752 0 0 0 0 0 4 0 0
> 2751 2749 0 0 392830 28
> 65189 91242 2725 0 0 0 0 0 5 1 0
> 2727 2725 0 0 416871 31
> 59326 84893 2834 0 0 0 0 0 5 1 0
> 2828 2826 0 0 380811 28
> 61064 87351 2553 0 0 0 0 0 9 0 0
> 2552 2550 0 0 382053 37
> 62637 88761 2551 0 0 0 0 0 5 1 0
> 2553 2551 0 0 384974 29
>
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-11 16:40 ` Robert Olsson
@ 2005-01-12 1:27 ` Jeremy M. Guthrie
2005-01-12 15:11 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 1:27 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
On Tuesday 11 January 2005 10:40 am, Robert Olsson wrote:
> About the same load as last time: 80 kpps forwarded, a drop rate of ~200 pps,
> and 2323 fib lookups/sec. Not so bad for a PIII, but it may be tunable further.
>
> Did the e1000 patch cure the problem where interrupts got enabled for
> unidirectional traffic? Ring size? I've never seen any win in system
> performance with RX rings larger than 256, at least not in the lab.
ETH3 interrupts (calculated from below): 1479968
ETH2 interrupts: 261543
Packets RX'd on ETH3: 3892720
Packets dropped on RX on ETH3: 10305
This equates to about a 0.26% drop rate. With a 256-packet RX ring size I see
about a 0.42% drop rate.
This is using both the newest Intel driver with your patch and an increased
ring size of 2048.
Tue Jan 11 19:15:04 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2337377824 errors:14992144 dropped:14992144
overruns:9643826 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3984064976 (3799.5 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
CPU0 CPU1
0: 93627934 353254270 IO-APIC-edge timer
1: 35 507 IO-APIC-edge i8042
7: 0 0 IO-APIC-level ohci_hcd
8: 0 2 IO-APIC-edge rtc
12: 73 145 IO-APIC-edge i8042
14: 120 313 IO-APIC-edge ide0
18: 2158179576 1815 IO-APIC-level eth3
20: 2 2136514988 IO-APIC-level eth2
27: 204201 371301 IO-APIC-level eth0
28: 14585 75320 IO-APIC-level aic7xxx
30: 0 0 IO-APIC-level acpi
NMI: 0 0
LOC: 446922783 446921227
ERR: 0
MIS: 0
Tue Jan 11 19:16:05 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2341270544 errors:15002449 dropped:15002449
overruns:9652393 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1664751812 (1587.6 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
CPU0 CPU1
0: 93639955 353302319 IO-APIC-edge timer
1: 35 507 IO-APIC-edge i8042
7: 0 0 IO-APIC-level ohci_hcd
8: 0 2 IO-APIC-edge rtc
12: 73 145 IO-APIC-edge i8042
14: 120 313 IO-APIC-edge ide0
18: 2159659544 1815 IO-APIC-level eth3
20: 2 2136776531 IO-APIC-level eth2
27: 204245 371369 IO-APIC-level eth0
28: 14593 75343 IO-APIC-level aic7xxx
30: 0 0 IO-APIC-level acpi
NMI: 0 0
LOC: 446982858 446981302
ERR: 0
MIS: 0
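The 0.26% figure quoted earlier in this message falls directly out of the deltas between the two snapshots above; a quick check:

```python
# Deltas over the ~60 s between the 19:15:04 and 19:16:05 snapshots.
rx_delta   = 2341270544 - 2337377824  # eth3 RX packets
drop_delta = 15002449   - 14992144    # eth3 RX drops

drop_pct = 100 * drop_delta / rx_delta
print(f"{drop_pct:.2f}% of received packets dropped")  # → 0.26%
```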
> --ro
>
> Jeremy M. Guthrie writes:
> > date ; ifconfig eth3 ; cat /proc/net/softnet_stat ;
> > cat /proc/net/stat/rt_cache ; sleep 60 ; date ; ifconfig eth3 ;
> > cat /proc/net/softnet_stat ; cat /proc/net/stat/rt_cache
> >
> > Tue Jan 11 09:12:21 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:3519452697 errors:5558592 dropped:5558592
> > overruns:4011523 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:497775695 (474.7 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > f59427dc 150dc67c 00c562ab 000d2659 00000000 00000000 00000000 00000000
> > 00547b5b
> > 00038622 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> > 0006804f
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000f3c2 aeca1565 272c383d 00000000 00000000 000089a8 00005298 00000002
> > 001db403 0003054f 00000000 230aa73c 22fe33eb 00000cf0 00000000 b37011dc
> > 00957c14
> > 0000f3c2 0000b975 00029703 00000000 00000000 000035a5 00000000 00000000
> > 00000015 00000083 00000002 000038c8 000038a3 00000000 00000000 0012a566
> > 0000014d
> >
> > Tue Jan 11 09:13:21 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:3524302561 errors:5571396 dropped:5571396
> > overruns:4022383 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:3065327250 (2923.3 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > f5de26f5 150dc67c 00c562b4 000d2659 00000000 00000000 00000000 00000000
> > 00547ffd
> > 00038632 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> > 0006807c
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000fc1e af11f07b 272e58d3 00000000 00000000 000089a8 00005298 00000002
> > 001db571 00030572 00000000 230cc7f1 23005428 00000cf0 00000000 b4b5b529
> > 009583b9
> > 0000fc1e 0000b977 00029710 00000000 00000000 000035a7 00000000 00000000
> > 00000015 00000083 00000002 000038ca 000038a5 00000000 00000000 0012a5c8
> > 0000014d
> >
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-11 18:46 ` Robert Olsson
@ 2005-01-12 1:30 ` Jeremy M. Guthrie
2005-01-12 16:02 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 1:30 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
On Tuesday 11 January 2005 12:46 pm, Robert Olsson wrote:
> Do we give up here, or do you want to try another setting for the route
> hash? That's beyond the scope of this subject, and you would have to reboot
> to change it.
I am up for pushing it if you do not think it is a waste of time. Based on
what I am seeing, it looks like I just need a faster CPU to do the work. My
goal would be to hit zero dropped packets with 10-15% CPU to spare, but I
fail to see how that will happen on this box. Do you concur that it is highly
unlikely I could get that kind of performance increase?
> Jeremy M. Guthrie writes:
> > ./rtstat
> > size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot
> > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > 63375 86092 2844 0 0 0 0 0 5 1
> > 1610369017 2846 2844 0 0 386039 17
> > 64870 84431 537031471 537492192 0 0 0 0 1
> > 0 0 2848 2846 0 0 386130 10
> > 58707 87889 2715 0 0 0 0 0 7 1
> > 0 2716 2714 0 0 384782 13
> > 60617 89993 2796 0 0 0 0 0 5 0
> > 0 2795 2793 0 0 384432 17
> > 62458 93221 2772 0 0 0 0 0 3 0
> > 0 2772 2770 0 0 399579 9
> > 63990 93792 2712 0 0 0 0 0 6 0
> > 0 2713 2711 0 0 422585 12
> > 65499 94235 2846 0 0 0 0 0 2 0
> > 0 2846 2844 0 0 424341 9
> > 59755 90992 2864 0 0 0 0 0 1 1
> > 0 2865 2863 0 0 385888 13
> > 61702 87681 2798 0 0 0 0 0 6 2
> > 0 2800 2798 0 0 383276 34
> > 63334 90557 2734 0 0 0 0 0 5 1
> > 0 2735 2733 0 0 422121 28
> > 64798 88178 2732 0 0 0 0 0 1 0
> > 0 2732 2730 0 0 410772 11
> > 58574 85910 2669 0 0 0 0 0 3 0
> > 0 2670 2668 0 0 393331 10
> > 60538 87238 2705 0 0 0 0 0 4 0
> > 0 2705 2703 0 0 382161 20
> > 62164 85738 2628 0 0 0 0 0 4 0
> > 0 2628 2626 0 0 373049 25
> > 63759 78752 2638 0 0 0 0 0 2 0
> > 0 2638 2636 0 0 348411 5
> > 65052 79832 2482 0 0 0 0 0 3 0
> > 0 2481 2479 0 0 365871 20
> > 58986 82611 2549 0 0 0 0 0 4 1
> > 0 2550 2548 0 0 366429 24
> > 60845 85340 2620 0 0 1 0 0 2 0
> > 0 2620 2618 0 0 369963 16
> > 62465 83122 2591 0 0 0 0 0 5 0
> > 0 2591 2589 0 0 371261 18
> > size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot
> > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > 63998 86141 2640 0 0 0 0 0 4 0
> > 537114631 2640 2638 0 0 386452 15
> > 65294 85989 2527 0 0 0 0 0 5 1
> > 1610369017 2528 2526 0 0 379801 26
> > 59241 84305 2662 0 0 0 0 0 7 0
> > 0 2662 2660 0 0 371925 35
> > 61194 85397 2837 0 0 0 0 0 3 0
> > 0 2837 2835 0 0 374788 17
> > 62908 88133 2688 0 0 1 0 0 4 1
> > 0 2689 2687 0 0 382411 19
> > 64329 88495 2648 0 0 0 0 0 5 1
> > 0 2649 2647 0 0 398763 30
> > 65560 84994 2447 0 0 1 0 0 6 2
> > 0 2449 2447 0 0 394745 33
> > 59677 81478 2568 0 0 0 0 0 5 0
> > 0 2569 2567 0 0 341439 22
> > 61459 81225 2623 0 0 0 0 0 5 0
> > 0 2623 2621 0 0 345996 30
> > 63038 83660 2544 0 0 0 0 0 3 0
> > 0 2544 2542 0 0 381567 21
> > 64494 86321 2620 0 0 0 0 0 4 0
> > 0 2620 2618 0 0 395278 19
> > 65528 88754 2766 0 0 0 0 0 6 0
> > 0 2765 2763 0 0 421075 34
> > 59948 86443 2990 0 0 0 0 0 5 0
> > 0 2990 2988 0 0 373605 29
> > 61860 81912 2786 0 0 0 0 0 5 1
> > 0 2787 2785 0 0 354678 34
> > 63630 80399 2981 0 0 0 0 0 4 0
> > 0 2979 2977 0 0 354983 32
> > 65215 83600 2934 0 0 0 0 0 6 3
> > 0 2937 2935 0 0 371497 41
> > 59322 85583 2821 0 0 0 0 0 7 0
> > 0 2822 2820 0 0 382896 40
> > 61268 82602 2817 0 0 0 0 0 9 2
> > 0 2819 2817 0 0 368078 54
> > 62989 85336 2760 0 0 1 0 0 9 0
> > 0 2760 2758 0 0 394212 39
> > 64387 84861 2603 0 0 0 0 0 7 0
> > 0 2602 2600 0 0 390852 36
> > size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot
> > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > 65681 82848 2625 0 0 0 0 0 8 0
> > 537114631 2625 2623 0 0 370769 49
> > 59923 82826 2685 0 0 0 0 0 6 0
> > 1610369017 2685 2683 0 0 344094 41
> > 61700 82705 2706 0 0 0 0 0 7 0
> > 0 2706 2704 0 0 368474 39
> > 63247 91042 2592 0 0 0 0 0 9 1
> > 0 2593 2591 0 0 399393 57
> > 64636 93050 2530 0 0 0 0 0 6 0
> > 0 2530 2528 0 0 405903 24
> > 58301 90662 2554 0 0 0 0 0 5 0
> > 0 2555 2553 0 0 404296 39
> > 60146 88405 2567 0 0 0 0 0 8 1
> > 0 2567 2565 0 0 370227 49
> > 61822 86417 2579 0 0 0 0 0 7 1
> > 0 2580 2578 0 0 366323 43
> > 63351 86517 2594 0 0 1 0 0 9 0
> > 0 2594 2592 0 0 363340 51
> > 64739 87879 2669 0 0 0 0 0 3 0
> > 0 2670 2668 0 0 387580 26
> > 58397 88297 2566 0 0 0 0 0 4 0
> > 0 2566 2564 0 0 396980 34
> > 60352 85039 2707 0 0 0 0 0 6 1
> > 0 2709 2707 0 0 367790 47
> > 62169 87083 2779 0 0 0 0 0 6 0
> > 0 2775 2773 0 0 377163 31
> > 63808 88700 2752 0 0 0 0 0 4 0
> > 0 2751 2749 0 0 392830 28
> > 65189 91242 2725 0 0 0 0 0 5 1
> > 0 2727 2725 0 0 416871 31
> > 59326 84893 2834 0 0 0 0 0 5 1
> > 0 2828 2826 0 0 380811 28
> > 61064 87351 2553 0 0 0 0 0 9 0
> > 0 2552 2550 0 0 382053 37
> > 62637 88761 2551 0 0 0 0 0 5 1
> > 0 2553 2551 0 0 384974 29
> >
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 1:27 ` Jeremy M. Guthrie
@ 2005-01-12 15:11 ` Robert Olsson
2005-01-12 16:24 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 15:11 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
> > Did the e1000 patch cure the problem where interrupts got enabled for
> > unidirectional traffic? Ring size? I've never seen any win in system
> > performance with RX rings larger than 256, at least not in the lab.
> ETH3 Interrupts(calc'd from below): 1479968
> ETH2 Interrupts: 261543
> Packets RX'd on ETH3: 3892720
> Packets dropped on RX on ETH3: 10305
Very strange...
eth3 is bound to CPU0, which in turn carries all of the packet load. If your
CPU0 really were saturated (as the drops suggest), we should see almost no
RX interrupts on eth3, since NAPI would keep the device in polling mode. But
there are a lot: one irq for every three packets. Why?
Can you investigate? e1000 has a problem like this with unidirectional
traffic without the patch I sent.
Or is your traffic really that extremely bursty? I doubt it.
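The "one irq per every three packets" claim comes from the interrupt and packet deltas in the snapshots quoted below; with NAPI, a saturated receive path should sit in polling mode with far fewer interrupts than this:

```python
# eth3 deltas over the 60 s window between the quoted snapshots.
irq_delta = 2159659544 - 2158179576  # eth3 interrupts on CPU0
pkt_delta = 2341270544 - 2337377824  # eth3 RX packets

print(f"{pkt_delta / irq_delta:.2f} packets per interrupt")  # → 2.63
```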
--ro
> This equates to about a 0.26% drop rate. W/ 256 packet RX ring size I see
> about a 0.42% drop rate.
>
> This is using both the newest Intel driver w/ your patch and an increased ring
> size of 2048.
>
> Tue Jan 11 19:15:04 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:2337377824 errors:14992144 dropped:14992144
> overruns:9643826 frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:3984064976 (3799.5 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> CPU0 CPU1
> 0: 93627934 353254270 IO-APIC-edge timer
> 1: 35 507 IO-APIC-edge i8042
> 7: 0 0 IO-APIC-level ohci_hcd
> 8: 0 2 IO-APIC-edge rtc
> 12: 73 145 IO-APIC-edge i8042
> 14: 120 313 IO-APIC-edge ide0
> 18: 2158179576 1815 IO-APIC-level eth3
> 20: 2 2136514988 IO-APIC-level eth2
> 27: 204201 371301 IO-APIC-level eth0
> 28: 14585 75320 IO-APIC-level aic7xxx
> 30: 0 0 IO-APIC-level acpi
> NMI: 0 0
> LOC: 446922783 446921227
> ERR: 0
> MIS: 0
>
> Tue Jan 11 19:16:05 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:2341270544 errors:15002449 dropped:15002449
> overruns:9652393 frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:1664751812 (1587.6 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> CPU0 CPU1
> 0: 93639955 353302319 IO-APIC-edge timer
> 1: 35 507 IO-APIC-edge i8042
> 7: 0 0 IO-APIC-level ohci_hcd
> 8: 0 2 IO-APIC-edge rtc
> 12: 73 145 IO-APIC-edge i8042
> 14: 120 313 IO-APIC-edge ide0
> 18: 2159659544 1815 IO-APIC-level eth3
> 20: 2 2136776531 IO-APIC-level eth2
> 27: 204245 371369 IO-APIC-level eth0
> 28: 14593 75343 IO-APIC-level aic7xxx
> 30: 0 0 IO-APIC-level acpi
> NMI: 0 0
> LOC: 446982858 446981302
> ERR: 0
> MIS: 0
>
> --ro
> >
> > Jeremy M. Guthrie writes:
> > > date ; ifconfig eth3 ; cat /proc/net/softnet_stat ;
> > > cat /proc/net/stat/rt_cache ; sleep 60 ; date ; ifconfig eth3 ;
> > > cat /proc/net/softnet_stat ; cat /proc/net/stat/rt_cache
> > >
> > > Tue Jan 11 09:12:21 CST 2005
> > > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > > RX packets:3519452697 errors:5558592 dropped:5558592
> > > overruns:4011523 frame:0
> > > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > > collisions:0 txqueuelen:1000
> > > RX bytes:497775695 (474.7 Mb) TX bytes:398 (398.0 b)
> > > Base address:0x22a0 Memory:eff80000-effa0000
> > >
> > > f59427dc 150dc67c 00c562ab 000d2659 00000000 00000000 00000000 00000000
> > > 00547b5b
> > > 00038622 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> > > 0006804f
> > > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > > 0000f3c2 aeca1565 272c383d 00000000 00000000 000089a8 00005298 00000002
> > > 001db403 0003054f 00000000 230aa73c 22fe33eb 00000cf0 00000000 b37011dc
> > > 00957c14
> > > 0000f3c2 0000b975 00029703 00000000 00000000 000035a5 00000000 00000000
> > > 00000015 00000083 00000002 000038c8 000038a3 00000000 00000000 0012a566
> > > 0000014d
> > >
> > > Tue Jan 11 09:13:21 CST 2005
> > > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > > RX packets:3524302561 errors:5571396 dropped:5571396
> > > overruns:4022383 frame:0
> > > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > > collisions:0 txqueuelen:1000
> > > RX bytes:3065327250 (2923.3 Mb) TX bytes:398 (398.0 b)
> > > Base address:0x22a0 Memory:eff80000-effa0000
> > >
> > > f5de26f5 150dc67c 00c562b4 000d2659 00000000 00000000 00000000 00000000
> > > 00547ffd
> > > 00038632 00000000 00000065 00000000 00000000 00000000 00000000 00000000
> > > 0006807c
> > > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > > 0000fc1e af11f07b 272e58d3 00000000 00000000 000089a8 00005298 00000002
> > > 001db571 00030572 00000000 230cc7f1 23005428 00000cf0 00000000 b4b5b529
> > > 009583b9
> > > 0000fc1e 0000b977 00029710 00000000 00000000 000035a7 00000000 00000000
> > > 00000015 00000083 00000002 000038ca 000038a5 00000000 00000000 0012a5c8
> > > 0000014d
> > >
> > > --
> > >
> > > --------------------------------------------------
> > > Jeremy M. Guthrie jeremy.guthrie@berbee.com
> > > Senior Network Engineer Phone: 608-298-1061
> > > Berbee Fax: 608-288-3007
> > > 5520 Research Park Drive NOC: 608-298-1102
> > > Madison, WI 53711
>
> --
>
> --------------------------------------------------
> Jeremy M. Guthrie jeremy.guthrie@berbee.com
> Senior Network Engineer Phone: 608-298-1061
> Berbee Fax: 608-288-3007
> 5520 Research Park Drive NOC: 608-298-1102
> Madison, WI 53711
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 1:30 ` Jeremy M. Guthrie
@ 2005-01-12 16:02 ` Robert Olsson
0 siblings, 0 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 16:02 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
> I am up for pushing it if you do not think it is a waste of time. Based on
> what I am seeing it looks like I just need a faster CPU to do the work. My
> goal would be to hit zero dropped packets w/ 10-15% CPU to spare but I fail
> to see how that will happen on this box. Do you concur that it would be
> highly unlikely I would be able to get that kind of performance increase?
Well, we can try to reduce some of the linear searching in the route hash
once we understand why we see all the RX interrupts.
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 15:11 ` Robert Olsson
@ 2005-01-12 16:24 ` Jeremy M. Guthrie
2005-01-12 19:27 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 16:24 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 4775 bytes --]
On Wednesday 12 January 2005 09:11 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > Did the e1000 patch cure the problem that interrupts got enabled for
> > > unidirectional traffic? Ring size? I've never seen any win in system
> > > performance w. RX rings larger than 256 at least not in lab.
> >
> > ETH3 Interrupts(calc'd from below): 1479968
> > ETH2 Interrupts: 261543
> > Packets RX'd on ETH3: 3892720
> > Packets dropped on RX on ETH3: 10305
>
> Very strange...
>
> eth3 is bound to CPU0 which in turn has all packet load... If we were
> to believe your CPU0 was saturated (due to the drops). We should see no
> (RX) interrupts on eth3. But there is a lot... one irq per every three
> packet. Why?
I have no idea why it would be doing this.
> Can you investigate? e1000 has problem like this w. unidirectional traffic
> w/o the patch I sent.
This appears to be where the problem is getting beyond my expertise. I
verified NAPI is turned on. I also verified the patch is in place. I am
open to suggestions, but otherwise I am not the world's best coder.
> Or is your traffic so extremely bursty. No?
By the nature of our business it can be very bursty. We have so many
sources and destinations for traffic that the load is generally pretty bursty.
We might be running 300-400 mbps with spikes of 40-100 mbps.
> --ro
>
> > This equates to about a 0.26% drop rate. W/ 256 packet RX ring size I
> > see about a 0.42% drop rate.
> >
> > This is using both the newest Intel driver w/ your patch and an
> > increased ring size of 2048.
> >
> > Tue Jan 11 19:15:04 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:2337377824 errors:14992144 dropped:14992144
> > overruns:9643826 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:3984064976 (3799.5 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > CPU0 CPU1
> > 0: 93627934 353254270 IO-APIC-edge timer
> > 1: 35 507 IO-APIC-edge i8042
> > 7: 0 0 IO-APIC-level ohci_hcd
> > 8: 0 2 IO-APIC-edge rtc
> > 12: 73 145 IO-APIC-edge i8042
> > 14: 120 313 IO-APIC-edge ide0
> > 18: 2158179576 1815 IO-APIC-level eth3
> > 20: 2 2136514988 IO-APIC-level eth2
> > 27: 204201 371301 IO-APIC-level eth0
> > 28: 14585 75320 IO-APIC-level aic7xxx
> > 30: 0 0 IO-APIC-level acpi
> > NMI: 0 0
> > LOC: 446922783 446921227
> > ERR: 0
> > MIS: 0
> >
> > Tue Jan 11 19:16:05 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:2341270544 errors:15002449 dropped:15002449
> > overruns:9652393 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:1664751812 (1587.6 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > CPU0 CPU1
> > 0: 93639955 353302319 IO-APIC-edge timer
> > 1: 35 507 IO-APIC-edge i8042
> > 7: 0 0 IO-APIC-level ohci_hcd
> > 8: 0 2 IO-APIC-edge rtc
> > 12: 73 145 IO-APIC-edge i8042
> > 14: 120 313 IO-APIC-edge ide0
> > 18: 2159659544 1815 IO-APIC-level eth3
> > 20: 2 2136776531 IO-APIC-level eth2
> > 27: 204245 371369 IO-APIC-level eth0
> > 28: 14593 75343 IO-APIC-level aic7xxx
> > 30: 0 0 IO-APIC-level acpi
> > NMI: 0 0
> > LOC: 446982858 446981302
> > ERR: 0
> > MIS: 0
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 16:24 ` Jeremy M. Guthrie
@ 2005-01-12 19:27 ` Robert Olsson
2005-01-12 20:11 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 19:27 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
> > > ETH3 Interrupts(calc'd from below): 1479968
> > Very strange...
> >
> > eth3 is bound to CPU0 which in turn has all packet load... If we were
> > to believe your CPU0 was saturated (due to the drops). We should see no
> > (RX) interrupts on eth3. But there is a lot... one irq per every three
> > packet. Why?
> I have no idea why it would be doing this.
Huh, it seems you didn't apply the patch I sent. Below is a diff from my editor against your
e1000_main.c
--ro
--- e1000_main.c.jmg 2005-01-12 20:14:08.324168072 +0100
+++ e1000_main.c 2005-01-12 20:17:24.777302656 +0100
@@ -2264,14 +2264,13 @@
netdev->quota -= work_done;
/* if no Rx and Tx cleanup work was done, exit the polling mode */
- if(!tx_cleaned || (work_done < work_to_do) ||
- !netif_running(netdev)) {
+ if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
quit_polling: netif_rx_complete(netdev);
e1000_irq_enable(adapter);
return 0;
}
- return (work_done >= work_to_do);
+ return 1;
}
#endif
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 19:27 ` Robert Olsson
@ 2005-01-12 20:11 ` Jeremy M. Guthrie
2005-01-12 20:21 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 20:11 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 4141 bytes --]
Latest numbers after your patch Robert.
Wed Jan 12 14:05:36 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21951496 errors:2412189 dropped:2412189 overruns:377090
frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3455362966 (3295.2 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
95b83157 150dc67c 00c734a1 000d2659 00000000 00000000 00000000 00000000
0073cee4
00044494 00000000 00000075 00000000 00000000 00000000 00000000 00000000
00097c77
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000f8cd 41403122 34c30e49 00000000 00000000 0000b0dd 00006a38 00000002
0027279d 0004147b 00000000 2f528a5f 2f42efcf 0000104f 00000000 81d262df
00c5d75c
0000f8cd 0000e332 0003238f 00000000 00000000 00004263 00000000 00000000
0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b879
0000024c
Wed Jan 12 14:06:36 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:28053281 errors:2899869 dropped:2899869 overruns:427243
frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2607094069 (2486.3 Mb) TX bytes:398 (398.0 b)
Base address:0x22a0 Memory:eff80000-effa0000
96154d20 150dc67c 00c78bc2 000d2659 00000000 00000000 00000000 00000000
0073d31d
00044499 00000000 00000075 00000000 00000000 00000000 00000000 00000000
00097d2d
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000fc6a 419ad196 34c586ba 00000000 00000000 0000b0e0 00006a40 00000002
002729c5 000414cb 00000000 2f5502ff 2f4567f7 0000104f 00000000 83790467
00c5e33c
0000fc6a 0000e333 00032393 00000000 00000000 00004263 00000000 00000000
0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b898
0000024c
On Wednesday 12 January 2005 01:27 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > > ETH3 Interrupts(calc'd from below): 1479968
> > >
> > > Very strange...
> > >
> > > eth3 is bound to CPU0 which in turn has all packet load... If we were
> > > to believe your CPU0 was saturated (due to the drops). We should see
> > > no (RX) interrupts on eth3. But there is a lot... one irq per every
> > > three packet. Why?
> >
> > I have no idea why it would be doing this.
>
> Huh seems you didn't add the patch I sent. Below is diff from my editor to
> your e1000_main.c
>
> --ro
>
>
> --- e1000_main.c.jmg 2005-01-12 20:14:08.324168072 +0100
> +++ e1000_main.c 2005-01-12 20:17:24.777302656 +0100
> @@ -2264,14 +2264,13 @@
> netdev->quota -= work_done;
>
> /* if no Rx and Tx cleanup work was done, exit the polling mode */
> - if(!tx_cleaned || (work_done < work_to_do) ||
> - !netif_running(netdev)) {
> + if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
> quit_polling: netif_rx_complete(netdev);
> e1000_irq_enable(adapter);
> return 0;
> }
>
> - return (work_done >= work_to_do);
> + return 1;
> }
>
> #endif
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 20:11 ` Jeremy M. Guthrie
@ 2005-01-12 20:21 ` Robert Olsson
2005-01-12 20:30 ` Jeremy M. Guthrie
2005-01-12 20:45 ` Jeremy M. Guthrie
0 siblings, 2 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 20:21 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
>
> Latest numbers after your patch Robert.
Did the RX interrupts go down?
--ro
> Wed Jan 12 14:05:36 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:21951496 errors:2412189 dropped:2412189 overruns:377090
> frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:3455362966 (3295.2 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> 95b83157 150dc67c 00c734a1 000d2659 00000000 00000000 00000000 00000000
> 0073cee4
> 00044494 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> 00097c77
> entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
> out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
> gc_dst_overflow in_hlist_search out_hlist_search
> 0000f8cd 41403122 34c30e49 00000000 00000000 0000b0dd 00006a38 00000002
> 0027279d 0004147b 00000000 2f528a5f 2f42efcf 0000104f 00000000 81d262df
> 00c5d75c
> 0000f8cd 0000e332 0003238f 00000000 00000000 00004263 00000000 00000000
> 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b879
> 0000024c
>
>
>
>
>
> Wed Jan 12 14:06:36 CST 2005
> eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:28053281 errors:2899869 dropped:2899869 overruns:427243
> frame:0
> TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:2607094069 (2486.3 Mb) TX bytes:398 (398.0 b)
> Base address:0x22a0 Memory:eff80000-effa0000
>
> 96154d20 150dc67c 00c78bc2 000d2659 00000000 00000000 00000000 00000000
> 0073d31d
> 00044499 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> 00097d2d
> entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
> out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
> gc_dst_overflow in_hlist_search out_hlist_search
> 0000fc6a 419ad196 34c586ba 00000000 00000000 0000b0e0 00006a40 00000002
> 002729c5 000414cb 00000000 2f5502ff 2f4567f7 0000104f 00000000 83790467
> 00c5e33c
> 0000fc6a 0000e333 00032393 00000000 00000000 00004263 00000000 00000000
> 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b898
> 0000024c
>
>
> On Wednesday 12 January 2005 01:27 pm, Robert Olsson wrote:
> > Jeremy M. Guthrie writes:
> > > > > ETH3 Interrupts(calc'd from below): 1479968
> > > >
> > > > Very strange...
> > > >
> > > > eth3 is bound to CPU0 which in turn has all packet load... If we were
> > > > to believe your CPU0 was saturated (due to the drops). We should see
> > > > no (RX) interrupts on eth3. But there is a lot... one irq per every
> > > > three packet. Why?
> > >
> > > I have no idea why it would be doing this.
> >
> > Huh seems you didn't add the patch I sent. Below is diff from my editor to
> > your e1000_main.c
> >
> > --ro
> >
> >
> > --- e1000_main.c.jmg 2005-01-12 20:14:08.324168072 +0100
> > +++ e1000_main.c 2005-01-12 20:17:24.777302656 +0100
> > @@ -2264,14 +2264,13 @@
> > netdev->quota -= work_done;
> >
> > /* if no Rx and Tx cleanup work was done, exit the polling mode */
> > - if(!tx_cleaned || (work_done < work_to_do) ||
> > - !netif_running(netdev)) {
> > + if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
> > quit_polling: netif_rx_complete(netdev);
> > e1000_irq_enable(adapter);
> > return 0;
> > }
> >
> > - return (work_done >= work_to_do);
> > + return 1;
> > }
> >
> > #endif
>
> --
>
> --------------------------------------------------
> Jeremy M. Guthrie jeremy.guthrie@berbee.com
> Senior Network Engineer Phone: 608-298-1061
> Berbee Fax: 608-288-3007
> 5520 Research Park Drive NOC: 608-298-1102
> Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 20:21 ` Robert Olsson
@ 2005-01-12 20:30 ` Jeremy M. Guthrie
2005-01-12 20:45 ` Jeremy M. Guthrie
1 sibling, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 20:30 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 7120 bytes --]
Sorry, here is the latest. BTW, it is VERY sluggish now.
Wed Jan 12 14:25:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:32030222 errors:105026820 dropped:105026820
overruns:100748696 frame:0
9651ffeb 150dc67c 00c814c1 000d2659 00000000 00000000 00000000 00000000
0073d527
00044681 00000000 000000f8 00000000 00000000 00000000 00000000 00000000
00097d42
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
00020000 41c25a6f 34daaca8 00000000 00000000 0000b11d 00006a40 00000002
00272af5 00041525 00000000 2f6a27e5 2f4b712d 000f2bd8 000f1b0e 84036c96
00c5e8a8
00020000 0000e336 0003256f 00000000 00000000 00004292 00000000 00000000
0000001e 000000c0 00000002 00004762 0000462a 0000010e 0000010b 0016b898
0000024c
CPU0 CPU1
18: 3586173518 1815 IO-APIC-level eth3
20: 2 2464382507 IO-APIC-level eth2
Wed Jan 12 14:26:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:32097966 errors:110579564 dropped:110579564
overruns:106248073 frame:0
965308aa 150dc67c 00c818e4 000d2659 00000000 00000000 00000000 00000000
0073d527
0004468b 00000000 000000f8 00000000 00000000 00000000 00000000 00000000
00097d42
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
00020000 41c2602e 34dbaf9f 00000000 00000000 0000b11e 00006a40 00000002
00272af5 00041527 00000000 2f6b2ad8 2f4b9a88 00100571 000ff4a4 84036c9e
00c5e8a8
00020000 0000e336 00032578 00000000 00000000 00004292 00000000 00000000
0000001e 000000c0 00000002 00004762 0000462a 0000010e 0000010b 0016b898
0000024c
CPU0 CPU1
18: 3586173518 1815 IO-APIC-level eth3
20: 2 2464387985 IO-APIC-level eth2
On Wednesday 12 January 2005 02:21 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > Latest numbers after your patch Robert.
>
> Did the RX interrupts go down?
>
> --ro
>
> > Wed Jan 12 14:05:36 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:21951496 errors:2412189 dropped:2412189
> > overruns:377090 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:3455362966 (3295.2 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > 95b83157 150dc67c 00c734a1 000d2659 00000000 00000000 00000000 00000000
> > 0073cee4
> > 00044494 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> > 00097c77
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000f8cd 41403122 34c30e49 00000000 00000000 0000b0dd 00006a38 00000002
> > 0027279d 0004147b 00000000 2f528a5f 2f42efcf 0000104f 00000000 81d262df
> > 00c5d75c
> > 0000f8cd 0000e332 0003238f 00000000 00000000 00004263 00000000 00000000
> > 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b879
> > 0000024c
> >
> >
> >
> >
> >
> > Wed Jan 12 14:06:36 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:28053281 errors:2899869 dropped:2899869
> > overruns:427243 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:2607094069 (2486.3 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > 96154d20 150dc67c 00c78bc2 000d2659 00000000 00000000 00000000 00000000
> > 0073d31d
> > 00044499 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> > 00097d2d
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000fc6a 419ad196 34c586ba 00000000 00000000 0000b0e0 00006a40 00000002
> > 002729c5 000414cb 00000000 2f5502ff 2f4567f7 0000104f 00000000 83790467
> > 00c5e33c
> > 0000fc6a 0000e333 00032393 00000000 00000000 00004263 00000000 00000000
> > 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b898
> > 0000024c
> >
> > On Wednesday 12 January 2005 01:27 pm, Robert Olsson wrote:
> > > Jeremy M. Guthrie writes:
> > > > > > ETH3 Interrupts(calc'd from below): 1479968
> > > > >
> > > > > Very strange...
> > > > >
> > > > > eth3 is bound to CPU0 which in turn has all packet load... If we
> > > > > were to believe your CPU0 was saturated (due to the drops). We
> > > > > should see no (RX) interrupts on eth3. But there is a lot... one
> > > > > irq per every three packet. Why?
> > > >
> > > > I have no idea why it would be doing this.
> > >
> > > Huh seems you didn't add the patch I sent. Below is diff from my
> > > editor to your e1000_main.c
> > >
> > > --ro
> > >
> > >
> > > --- e1000_main.c.jmg 2005-01-12 20:14:08.324168072 +0100
> > > +++ e1000_main.c 2005-01-12 20:17:24.777302656 +0100
> > > @@ -2264,14 +2264,13 @@
> > > netdev->quota -= work_done;
> > >
> > > /* if no Rx and Tx cleanup work was done, exit the polling mode */
> > > - if(!tx_cleaned || (work_done < work_to_do) ||
> > > - !netif_running(netdev)) {
> > > + if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
> > > quit_polling: netif_rx_complete(netdev);
> > > e1000_irq_enable(adapter);
> > > return 0;
> > > }
> > >
> > > - return (work_done >= work_to_do);
> > > + return 1;
> > > }
> > >
> > > #endif
> >
> > --
> >
> > --------------------------------------------------
> > Jeremy M. Guthrie jeremy.guthrie@berbee.com
> > Senior Network Engineer Phone: 608-298-1061
> > Berbee Fax: 608-288-3007
> > 5520 Research Park Drive NOC: 608-298-1102
> > Madison, WI 53711
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 20:21 ` Robert Olsson
2005-01-12 20:30 ` Jeremy M. Guthrie
@ 2005-01-12 20:45 ` Jeremy M. Guthrie
2005-01-12 22:02 ` Robert Olsson
2005-01-12 22:05 ` Jeremy M. Guthrie
1 sibling, 2 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 20:45 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 5110 bytes --]
My throughput dropped from 500 mbps to 8 mbps. 8(
On Wednesday 12 January 2005 02:21 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > Latest numbers after your patch Robert.
>
> Did the RX interrupts go down?
>
> --ro
>
> > Wed Jan 12 14:05:36 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:21951496 errors:2412189 dropped:2412189
> > overruns:377090 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:3455362966 (3295.2 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > 95b83157 150dc67c 00c734a1 000d2659 00000000 00000000 00000000 00000000
> > 0073cee4
> > 00044494 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> > 00097c77
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000f8cd 41403122 34c30e49 00000000 00000000 0000b0dd 00006a38 00000002
> > 0027279d 0004147b 00000000 2f528a5f 2f42efcf 0000104f 00000000 81d262df
> > 00c5d75c
> > 0000f8cd 0000e332 0003238f 00000000 00000000 00004263 00000000 00000000
> > 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b879
> > 0000024c
> >
> >
> >
> >
> >
> > Wed Jan 12 14:06:36 CST 2005
> > eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
> > inet addr:10.253.0.1 Bcast:10.255.255.255 Mask:255.255.255.0
> > inet6 addr: fe80::202:b3ff:fed5:7e30/64 Scope:Link
> > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> > RX packets:28053281 errors:2899869 dropped:2899869
> > overruns:427243 frame:0
> > TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:2607094069 (2486.3 Mb) TX bytes:398 (398.0 b)
> > Base address:0x22a0 Memory:eff80000-effa0000
> >
> > 96154d20 150dc67c 00c78bc2 000d2659 00000000 00000000 00000000 00000000
> > 0073d31d
> > 00044499 00000000 00000075 00000000 00000000 00000000 00000000 00000000
> > 00097d2d
> > entries in_hit in_slow_tot in_no_route in_brd in_martian_dst
> > in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored
> > gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
> > 0000fc6a 419ad196 34c586ba 00000000 00000000 0000b0e0 00006a40 00000002
> > 002729c5 000414cb 00000000 2f5502ff 2f4567f7 0000104f 00000000 83790467
> > 00c5e33c
> > 0000fc6a 0000e333 00032393 00000000 00000000 00004263 00000000 00000000
> > 0000001e 000000c0 00000002 00004650 00004626 00000000 00000000 0016b898
> > 0000024c
> >
> > On Wednesday 12 January 2005 01:27 pm, Robert Olsson wrote:
> > > Jeremy M. Guthrie writes:
> > > > > > ETH3 Interrupts(calc'd from below): 1479968
> > > > >
> > > > > Very strange...
> > > > >
> > > > > eth3 is bound to CPU0 which in turn has all packet load... If we
> > > > > were to believe your CPU0 was saturated (due to the drops). We
> > > > > should see no (RX) interrupts on eth3. But there is a lot... one
> > > > > irq per every three packet. Why?
> > > >
> > > > I have no idea why it would be doing this.
> > >
> > > Huh seems you didn't add the patch I sent. Below is diff from my
> > > editor to your e1000_main.c
> > >
> > > --ro
> > >
> > >
> > > --- e1000_main.c.jmg 2005-01-12 20:14:08.324168072 +0100
> > > +++ e1000_main.c 2005-01-12 20:17:24.777302656 +0100
> > > @@ -2264,14 +2264,13 @@
> > > netdev->quota -= work_done;
> > >
> > > /* if no Rx and Tx cleanup work was done, exit the polling mode */
> > > - if(!tx_cleaned || (work_done < work_to_do) ||
> > > - !netif_running(netdev)) {
> > > + if( (!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
> > > quit_polling: netif_rx_complete(netdev);
> > > e1000_irq_enable(adapter);
> > > return 0;
> > > }
> > >
> > > - return (work_done >= work_to_do);
> > > + return 1;
> > > }
> > >
> > > #endif
> >
> > --
> >
> > --------------------------------------------------
> > Jeremy M. Guthrie jeremy.guthrie@berbee.com
> > Senior Network Engineer Phone: 608-298-1061
> > Berbee Fax: 608-288-3007
> > 5520 Research Park Drive NOC: 608-298-1102
> > Madison, WI 53711
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 20:45 ` Jeremy M. Guthrie
@ 2005-01-12 22:02 ` Robert Olsson
2005-01-12 22:21 ` Jeremy M. Guthrie
2005-01-12 22:05 ` Jeremy M. Guthrie
1 sibling, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 22:02 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
> My throughput dropped from 500 mbps to 8mbps. 8(
Weird!
> CPU0 CPU1
> 18: 3586173518 1815 IO-APIC-level eth3
> 20: 2 2464382507 IO-APIC-level eth2
> CPU0 CPU1
> 18: 3586173518 1815 IO-APIC-level eth3
> 20: 2 2464387985 IO-APIC-level eth2
There are no IRQs on eth3 at all, so the RX softirq is constantly running.
That means it is now deferred to ksoftirqd and runs under scheduler
context. Do you have anything that competes with ksoftirqd for CPU0 on
your system?
It used to be recommended to increase the priority of ksoftirqd, but I wonder
what's going on with your system. We see interrupts on eth2...
And time_squeeze (the 3rd column) in /proc/net/softnet_stat indicates very
little activity from the RX softirq. It's soon time to turn in up here.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 20:45 ` Jeremy M. Guthrie
2005-01-12 22:02 ` Robert Olsson
@ 2005-01-12 22:05 ` Jeremy M. Guthrie
2005-01-12 22:22 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 22:05 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 628 bytes --]
I am now getting some push back from the project manager on this performance
problem. I am wondering if you think faster CPUs will
a) help relieve the symptoms of this problem,
b) not help, because now we will hit a '# of routes in the route-cache'
problem, or
c) help to a point, until the # of interrupts comes back and bites us.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 22:02 ` Robert Olsson
@ 2005-01-12 22:21 ` Jeremy M. Guthrie
[not found] ` <16869.42247.126428.508479@robur.slu.se>
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 22:21 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 1368 bytes --]
On Wednesday 12 January 2005 04:02 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > My throughput dropped from 500 mbps to 8mbps. 8(
>
> Weird!
>
> > CPU0 CPU1
> > 18: 3586173518 1815 IO-APIC-level eth3
> > 20: 2 2464382507 IO-APIC-level eth2
> >
> > CPU0 CPU1
> > 18: 3586173518 1815 IO-APIC-level eth3
> > 20: 2 2464387985 IO-APIC-level eth2
>
> There are no irq's on eth3 at all so RX softirq is constantly running.
> Which means it's deferred to ksoftirqd now and running under scheduler
> context. Do you have anything that competes with ksoftirqd for CPU0 on
> your system?
This box is primarily routing. Nothing should be competing for CPU0.
> It used to be recommended to increase the priority of ksoftirqd, but I wonder
> what's going on with your system. We do see interrupts on eth2...
>
> And time_squeeze (3rd col) in /proc/net/softnet_stat indicates there is very
> little activity from the RX softirq. It's soon time to turn in here.
>
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 22:05 ` Jeremy M. Guthrie
@ 2005-01-12 22:22 ` Robert Olsson
2005-01-12 22:30 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 22:22 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson, Stephen Hemminger, Jesse Brandeburg
Jeremy M. Guthrie writes:
> I am now getting some push back from the project manager on this performance
> problem. I am wondering if you think faster CPUs will
> a) help relieve the symptoms of this problem
> b) not help because now we will hit a '# of routes in the route-cache'
> problem
> c) or will help to a point till the # interrupts come back and bite us.
Back out the patch I sent and let hardirqs run the RX softirq as you
did before, but something is very wrong. You didn't answer whether there is
other load on the machine...
The route cache can probably be tuned, as you have four times the linear search
I see on one PIII system at 110 kpps with production traffic.
Of course the non-engineering solution is to buy more CPU... :-)
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 22:22 ` Robert Olsson
@ 2005-01-12 22:30 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 22:30 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson, Stephen Hemminger, Jesse Brandeburg
[-- Attachment #1: Type: text/plain, Size: 1445 bytes --]
On Wednesday 12 January 2005 04:22 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > I am now getting some push back from the project manager on this
> > performance problem. I am wondering if you think faster CPUs will
> > a) help relieve the symptoms of this problem
> > b) not help because now we will hit a '# of routes in the route-cache'
> > problem
> > c) or will help to a point till the # interrupts come back and bite us.
>
> Back out the patch I sent and let hardirqs run the RX softirq as you
> did before, but something is very wrong. You didn't answer whether there is
> other load on the machine...
I have backed out. As for the load, this box only does policy routing. Any
other functions it performs are part of its automated system to download the
next day's policy-routing config.
> The route cache can probably be tuned, as you have four times the linear search
> I see on one PIII system at 110 kpps with production traffic.
How would I go about tuning that?
> Of course the non-engineering solution is to buy more CPU... :-)
That is good to know. This will help me calm the situation a bit. 8)
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
[not found] ` <16869.42247.126428.508479@robur.slu.se>
@ 2005-01-12 22:42 ` Jeremy M. Guthrie
2005-01-12 22:47 ` Jeremy M. Guthrie
1 sibling, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 22:42 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 2983 bytes --]
Wed Jan 12 16:36:29 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:1868701 errors:894743 dropped:894743 overruns:410647
frame:0
ba659628 150dc67c 00c8ad95 000d2659 00000000 00000000 00000000 00000000
00764a9d
00045bd7 00000000 00000120 00000000 00000000 00000000 00000000 00000000
0009b5c4
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
00008e84 62d61b7d 37d9063a 00000000 00000000 0000b57e 00006c14 00000002
0027d72c 00042dfa 00000000 307d209e 30505aea 001d0a7c 001cf933 108c7bfb
00c976a6
00008e84 0000f0af 00032cd5 00000000 00000000 00004360 00000000 00000000
00000020 000000df 00000002 00004874 000046dd 0000016c 00000165 0016e55d
0000028f
CPU0 CPU1
18: 3680079907 1815 IO-APIC-level eth3
20: 2 2490755322 IO-APIC-level eth2
Wed Jan 12 16:37:30 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:6313404 errors:1216174 dropped:1216174 overruns:509016
frame:0
baa97098 150dc67c 00c8e8ac 000d2659 00000000 00000000 00000000 00000000
00764df1
00045be8 00000000 00000120 00000000 00000000 00000000 00000000 00000000
0009b650
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
0000e10e 63177e31 37db7aec 00000000 00000000 0000b587 00006c22 00000002
0027d819 00042e29 00000000 307f9564 3052cf38 001d0a7f 001cf933 114eef21
00c97a1e
0000e10e 0000f0b5 00032ce0 00000000 00000000 00004360 00000000 00000000
00000020 000000e0 00000002 00004875 000046de 0000016c 00000165 0016e5a1
00000295
CPU0 CPU1
18: 3680513406 1815 IO-APIC-level eth3
20: 2 2490979016 IO-APIC-level eth2
On Wednesday 12 January 2005 04:30 pm, Robert Olsson wrote:
> Can you give this a last try. It's from an older driver
>
>
> e1000_clean(struct net_device *netdev, int *budget)
> {
>         struct e1000_adapter *adapter = netdev->priv;
>         int work_to_do = min(*budget, netdev->quota);
>         int work_done = 0;
>
>         e1000_clean_tx_irq(adapter);
>         e1000_clean_rx_irq(adapter, &work_done, work_to_do);
>
>         *budget -= work_done;
>         netdev->quota -= work_done;
>
>         if (work_done < work_to_do) {
>                 netif_rx_complete(netdev);
>                 e1000_irq_enable(adapter);
>         }
>
>         return (work_done >= work_to_do);
> }
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
[not found] ` <16869.42247.126428.508479@robur.slu.se>
2005-01-12 22:42 ` Jeremy M. Guthrie
@ 2005-01-12 22:47 ` Jeremy M. Guthrie
2005-01-12 23:19 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 22:47 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 1385 bytes --]
Here is what it looked like I was getting before from the old driver... that
took me down to 8mbps.
"printk: 4624 messages suppressed.
dst cache overflow
printk: 4630 messages suppressed.
dst cache overflow
printk: 4631 messages suppressed.
dst cache overflow
printk: 4646 messages suppressed.
dst cache overflow
printk: 4645 messages suppressed.
dst cache overflow"
On Wednesday 12 January 2005 04:30 pm, Robert Olsson wrote:
> Can you give this a last try. It's from an older driver
>
>
> e1000_clean(struct net_device *netdev, int *budget)
> {
>         struct e1000_adapter *adapter = netdev->priv;
>         int work_to_do = min(*budget, netdev->quota);
>         int work_done = 0;
>
>         e1000_clean_tx_irq(adapter);
>         e1000_clean_rx_irq(adapter, &work_done, work_to_do);
>
>         *budget -= work_done;
>         netdev->quota -= work_done;
>
>         if (work_done < work_to_do) {
>                 netif_rx_complete(netdev);
>                 e1000_irq_enable(adapter);
>         }
>
>         return (work_done >= work_to_do);
> }
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 22:47 ` Jeremy M. Guthrie
@ 2005-01-12 23:19 ` Robert Olsson
2005-01-12 23:23 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-12 23:19 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> Here is what it looked like I was getting before from the old driver... that
> took me down to 8mbps.
>
> "printk: 4624 messages suppressed.
> dst cache overflow
> printk: 4630 messages suppressed.
> dst cache overflow
> printk: 4631 messages suppressed.
> dst cache overflow
> printk: 4646 messages suppressed.
> dst cache overflow
> printk: 4645 messages suppressed.
> dst cache overflow"
Thanks!
This is a known problem; remember, I asked you about this. We're now in the
RCU route-hash problem again. This is not necessarily solved by more CPU,
as higher throughput forces more dst entries to be freed and we get
closer to max_size.
As your traffic looks sane, double the bucket size of the route hash to
start with. Use the boot option rhash_entries. Look at rtstat.
--ro.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 23:19 ` Robert Olsson
@ 2005-01-12 23:23 ` Jeremy M. Guthrie
2005-01-13 8:56 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-12 23:23 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 1340 bytes --]
On Wednesday 12 January 2005 05:19 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > Here is what it looked like I was getting before from the old driver...
> > that took me down to 8mbps.
> >
> > "printk: 4624 messages suppressed.
> > dst cache overflow
> > printk: 4630 messages suppressed.
> > dst cache overflow
> > printk: 4631 messages suppressed.
> > dst cache overflow
> > printk: 4646 messages suppressed.
> > dst cache overflow
> > printk: 4645 messages suppressed.
> > dst cache overflow"
>
> Thanks!
>
> This is a known problem; remember, I asked you about this. We're now in the
> RCU route-hash problem again. This is not necessarily solved by more CPU,
> as higher throughput forces more dst entries to be freed and we get
> closer to max_size.
>
> As your traffic looks sane, double the bucket size of the route hash to
> start with. Use the boot option rhash_entries. Look at rtstat.
Does it make sense that the driver would kill throughput and force us to
output these types of messages?
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-12 23:23 ` Jeremy M. Guthrie
@ 2005-01-13 8:56 ` Robert Olsson
2005-01-13 19:28 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-13 8:56 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> > As your traffic looks sane double your bucket size of the route hash to
> > start with. Use the boot option w. rhash_entries. look at rtstat
> Does it make sense that the driver would kill throughput and force us to
> output these types of messages?
No, the other way around... Higher (driver/CPU) throughput/load causes more
dst entries to be freed; you reach max_size*ip_rt_gc_min_interval, which
is a constant, and get "dst cache overflow".
Increasing rhash_entries is the easiest way to attack this. Give it
a try. Monitor with rtstat. You might even have to quadruple your size.
--ro
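[Editorial note: a hedged sketch of the tuning Robert suggests. rhash_entries is a boot-time parameter, so it must go on the kernel command line; the 1048576 value below is illustrative, not a recommendation, and the kernel rounds it and reports the size it actually allocated at boot.]

```shell
# Bootloader kernel line (e.g. GRUB) -- the value here is illustrative:
#   kernel /boot/vmlinuz-2.6.10 root=/dev/sda1 rhash_entries=1048576
#
# After reboot, confirm what the kernel actually allocated:
dmesg | grep -i 'route cache hash table'
# The hard cap on cached routes can be inspected at runtime:
cat /proc/sys/net/ipv4/route/max_size
```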
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 8:56 ` Robert Olsson
@ 2005-01-13 19:28 ` Jeremy M. Guthrie
2005-01-13 20:00 ` David S. Miller
2005-01-13 21:12 ` Robert Olsson
0 siblings, 2 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-13 19:28 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 5591 bytes --]
After a few revs I just bumped rhash_entries to 2.4 million in an attempt to
get well above my actual usage.
You can see below that I am over 600K entries before it blows them away and
restarts. How do I bump up the time from 10 minutes to something longer?
With the way our system works, entries should be good for a day, as we won't
reprogram the policy-route table but once a day.
I still have some instrumented network card drivers to work with, but I now
show some 30-40% idle CPU on CPU0, still with 0.3% packet loss. I'll post
stats once I get the instrumented drivers in.
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
611167 98712 645 0 0 0 0 0 30 0
1074229262 645 643 0 0 61922 17
611461 96203 591 0 0 0 1 0 42 1
3220738034 592 590 0 0 63160 14
611759 93852 592 0 0 0 0 0 18 0
0 591 589 0 0 63078 6
612094 95276 632 0 0 0 0 0 28 0
0 632 630 0 0 63336 8
612368 94945 580 0 0 0 0 0 22 0
0 578 576 0 0 61224 18
612670 99258 622 0 0 0 0 0 28 0
0 621 619 0 0 63922 8
613025 93573 666 0 0 0 0 0 16 0
0 665 663 0 0 61781 6
613394 83917 722 0 0 0 0 0 8 0
0 721 719 0 0 55533 10
613697 85851 634 0 0 0 0 0 10 0
0 633 631 0 0 56394 12
613986 81854 611 0 0 0 0 0 8 0
0 610 608 0 0 54273 8
614349 81419 704 0 0 0 0 0 4 0
0 702 700 0 0 52641 8
614651 83312 616 0 0 0 0 0 14 0
0 617 615 0 0 54160 12
614962 83119 651 0 0 0 0 0 6 0
0 651 649 0 0 56612 4
615264 84871 583 0 0 0 0 0 10 0
0 583 581 0 0 56130 12
615521 83932 557 0 0 0 0 0 8 0
0 557 555 0 0 56229 10
615852 86368 626 0 0 0 0 0 8 0
0 624 622 0 0 56504 10
493558 47553 4603 0 0 0 0 0 2 0
0 4166 4164 0 0 28346 0
10091 46526 7096 0 0 0 0 0 2 3 0
0 0 0 0 554 0
16238 80565 6145 0 0 0 0 0 6 3 0
0 0 0 0 1334 0
21754 81224 5515 0 0 0 0 0 6 2 0
0 0 0 0 1793 0
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
26737 80792 4982 0 0 0 0 0 5 2
1074229262 0 0 0 0 2669 0
31085 82895 4347 0 0 0 0 0 5 1
3220738034 0 0 0 0 2397 0
35333 83220 4248 0 0 0 0 0 1 1 0
0 0 0 0 2754 0
39053 83910 3720 0 0 0 0 0 7 1 0
0 0 0 0 3328 0
42692 82373 3634 0 0 0 0 0 8 6 0
0 0 0 0 3485 3
46404 84900 3707 0 0 0 0 0 15 7 0
0 0 0 0 3889 1
On Thursday 13 January 2005 02:56 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > As your traffic looks sane double your bucket size of the route hash
> > > to start with. Use the boot option w. rhash_entries. look at rtstat
> >
> > Does it make sense that the driver would kill throughput and force us to
> > output these types of messages?
>
> No, the other way around... Higher (driver/CPU) throughput/load causes more
> dst entries to be freed; you reach max_size*ip_rt_gc_min_interval, which
> is a constant, and get "dst cache overflow".
>
> Increasing rhash_entries is the easiest way to attack this. Give it
> a try. Monitor with rtstat. You might even have to quadruple your size.
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 19:28 ` Jeremy M. Guthrie
@ 2005-01-13 20:00 ` David S. Miller
2005-01-13 20:43 ` Jeremy M. Guthrie
2005-01-13 21:12 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: David S. Miller @ 2005-01-13 20:00 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert.Olsson
On Thu, 13 Jan 2005 13:28:29 -0600
"Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> You can see below I am over 600K entries before it blows them away and
> restarts. How do I bump up the time from 10 minutes to something longer?
> With the way our system works, entries should be good for a day as we won't
> reprogram the policy route table but once a day.
Increase /proc/sys/net/ipv4/ip_rt_secret_interval
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 20:00 ` David S. Miller
@ 2005-01-13 20:43 ` Jeremy M. Guthrie
2005-01-13 23:13 ` David S. Miller
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-13 20:43 UTC (permalink / raw)
To: netdev; +Cc: David S. Miller, Robert.Olsson
[-- Attachment #1: Type: text/plain, Size: 883 bytes --]
I don't see that proc file there. I see it in 'iproute.c' but not in
/proc.
On Thursday 13 January 2005 02:00 pm, David S. Miller wrote:
> On Thu, 13 Jan 2005 13:28:29 -0600
>
> "Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> > You can see below I am over 600K entries before it blows them away and
> > restarts. How do I bump up the time from 10 minutes to something longer?
> > With the way our system works, entries should be good for a day as we
> > won't reprogram the policy route table but once a day.
>
> Increase /proc/sys/net/ipv4/ip_rt_secret_interval
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 19:28 ` Jeremy M. Guthrie
2005-01-13 20:00 ` David S. Miller
@ 2005-01-13 21:12 ` Robert Olsson
2005-01-13 22:27 ` Jeremy M. Guthrie
2005-01-14 14:59 ` Jeremy M. Guthrie
1 sibling, 2 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-13 21:12 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> After a few revs I just bumped rhash_entries to 2.4mil in an attempt to get
> well above my actual usage.
A bit hefty size :-) But the stats are looking much better, as we do much
less linear search (in_search) in the hash and fewer fib lookups (tot).
And you have no "dst cache overflows" now?
Is the e1000 patch I sent in use?
> You can see below I am over 600K entries before it blows them away and
> restarts.
This is part of the GC process to reclaim memory and unused dst entries.
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
> GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 615852 86368 626 0 0 0 0 0 8 0
> 0 624 622 0 0 56504 10
> 493558 47553 4603 0 0 0 0 0 2 0
> 0 4166 4164 0 0 28346 0
> 10091 46526 7096 0 0 0 0 0 2 3 0
> 0 0 0 0 554 0
> 16238 80565 6145 0 0 0 0 0 6 3 0
> 0 0 0 0 1334 0
In short, we reduce the hash size to remove unused flows and let it grow again.
You can see from (tot) that we have to recreate many of the flows at this
point. Most likely this is where we drop the packets. We have monitored small
drops in our system when GC happens. The GC can be smoothed out, but we leave
that for now.
> How do I bump up the time from 10 minutes to something longer?
Davem pointed out another periodic task that flushes the cache totally:
/proc/sys/net/ipv4/route/secret_interval
It flushes the cache completely, so all current flows have to be recreated. You
probably drop packets here in your setup. Yes, it can be an idea to increase it
or run the flush manually. But most routers drop packets now and then.
--ro
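[Editorial note: a sketch of the two options Robert mentions, assuming the 2.6-era proc paths; values are in seconds and do not persist across reboots.]

```shell
# Current full-flush interval; 600 s (10 min) was the 2.6 default:
cat /proc/sys/net/ipv4/route/secret_interval
# Stretch the periodic flush to 30 minutes:
echo 1800 > /proc/sys/net/ipv4/route/secret_interval
# Or flush the route cache by hand at a quiet moment instead:
echo 1 > /proc/sys/net/ipv4/route/flush
```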
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 21:12 ` Robert Olsson
@ 2005-01-13 22:27 ` Jeremy M. Guthrie
2005-01-14 15:44 ` Robert Olsson
2005-01-14 14:59 ` Jeremy M. Guthrie
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-13 22:27 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 8418 bytes --]
On Thursday 13 January 2005 03:12 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > After a few revs I just bumped rhash_entries to 2.4mil in an attempt
> > to get well above my actual usage.
>
> A bit hefty size :-) But the stats are looking much better, as we do much
> less linear search (in_search) in the hash and fewer fib lookups (tot).
Okay.
> And you have now "dst cache overflows"?
No, I haven't gotten any of these yet.
> Is the e1000 patch I sent in use?
yes. I also have another E1000 driver I haven't had a chance to try yet. It
is a bit more instrumented.
> > You can see below I am over 600K entries before it blows them away and
> > restarts.
>
> This is part of the GC process to reclaim memory and unused dst
> entries.
>
> > size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot
> > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> >
> >
> > 615852 86368 626 0 0 0 0 0 8 0
> > 0 624 622 0 0 56504 10
> > 493558 47553 4603 0 0 0 0 0 2 0
> > 0 4166 4164 0 0 28346 0
> > 10091 46526 7096 0 0 0 0 0 2 3
> > 0 0 0 0 0 554 0
> > 16238 80565 6145 0 0 0 0 0 6 3
> > 0 0 0 0 0 1334 0
>
> In short, we reduce the hash size to remove unused flows and let it grow
> again. You can see from (tot) that we have to recreate many of the flows at
> this point. Most likely this is where we drop the packets. We have monitored
> small drops in our system when GC happens. The GC can be smoothed out, but
> we leave that for now.
Sorry, not quite following.
IN: hits are cache hits, yes? Tot is the total number of flows created since
we last looked at the total flow count, correct? What would cause a packet
to drop in the network stack and thus show up in /proc/net/softnet_stat?
> > How do I bump up the time from 10 minutes to something longer?
>
> Davem pointed out another periodic task thats flushes the cache totally
> it's
>
> /proc/sys/net/ipv4/route/secret_interval
>
> It flushes the cache completely, so all current flows have to be recreated.
> You probably drop packets here in your setup. Yes, it can be an idea to
> increase it or run the flush manually. But most routers drop packets now and
> then.
If I set the secret_interval to 60 seconds, then I drop over 1% of all packets
coming through. So GC isn't exactly my friend.
Performance has picked up. I am not dropping packets anymore except during
GC. I upped my interval from 600 seconds to 1800 seconds.
Here are 15-second snapshots. Line 3 appears to be when GC takes effect.
Afterwards, everything stabilizes. These numbers are much better.
Thu Jan 13 16:10:30 CST 2005 entries: 000de44a Packets: 1255162 Errors: 0
PPS: 83677 Percentage: 0.0%
Thu Jan 13 16:10:45 CST 2005 entries: 000df2ad Packets: 1303050 Errors:
3875 PPS: 86870 Percentage: 0.29%
Thu Jan 13 16:11:00 CST 2005 entries: 0000b053 Packets: 1265398 Errors:
38586 PPS: 84359 Percentage: 3.04%
Thu Jan 13 16:11:15 CST 2005 entries: 00013df8 Packets: 1310618 Errors: 0
PPS: 87374 Percentage: 0.0%
Thu Jan 13 16:11:30 CST 2005 entries: 0001b527 Packets: 1282435 Errors: 0
PPS: 85495 Percentage: 0.0%
Thu Jan 13 16:11:45 CST 2005 entries: 000222bb Packets: 1213217 Errors: 0
PPS: 80881 Percentage: 0.0%
Thu Jan 13 16:12:01 CST 2005 entries: 00027c7e Packets: 1279811 Errors: 0
PPS: 85320 Percentage: 0.0%
Thu Jan 13 16:12:16 CST 2005 entries: 0002c5d5 Packets: 1224232 Errors: 0
PPS: 81615 Percentage: 0.0%
Thu Jan 13 16:12:31 CST 2005 entries: 0003090c Packets: 1243539 Errors: 0
PPS: 82902 Percentage: 0.0%
Thu Jan 13 16:12:46 CST 2005 entries: 00034d41 Packets: 1267200 Errors: 0
PPS: 84480 Percentage: 0.0%
Thu Jan 13 16:13:01 CST 2005 entries: 00038f82 Packets: 1238821 Errors: 0
PPS: 82588 Percentage: 0.0%
Thu Jan 13 16:13:16 CST 2005 entries: 0003cf6a Packets: 1245474 Errors: 0
PPS: 83031 Percentage: 0.0%
Thu Jan 13 16:13:31 CST 2005 entries: 00040d23 Packets: 1266478 Errors: 0
PPS: 84431 Percentage: 0.0%
Thu Jan 13 16:13:46 CST 2005 entries: 00044918 Packets: 1247576 Errors: 0
PPS: 83171 Percentage: 0.0%
Thu Jan 13 16:14:01 CST 2005 entries: 00048520 Packets: 1223002 Errors: 0
PPS: 81533 Percentage: 0.0%
Thu Jan 13 16:14:16 CST 2005 entries: 0004c0b6 Packets: 1303942 Errors:
333 PPS: 86929 Percentage: 0.2%
Thu Jan 13 16:14:32 CST 2005 entries: 0004f83e Packets: 1203334 Errors: 0
PPS: 80222 Percentage: 0.0%
Thu Jan 13 16:14:47 CST 2005 entries: 00053241 Packets: 1216611 Errors: 0
PPS: 81107 Percentage: 0.0%
Thu Jan 13 16:15:02 CST 2005 entries: 00056f97 Packets: 1281206 Errors: 0
PPS: 85413 Percentage: 0.0%
Thu Jan 13 16:15:17 CST 2005 entries: 0005b020 Packets: 1270007 Errors: 0
PPS: 84667 Percentage: 0.0%
Thu Jan 13 16:15:32 CST 2005 entries: 0005eb63 Packets: 1250099 Errors: 0
PPS: 83339 Percentage: 0.0%
Thu Jan 13 16:15:47 CST 2005 entries: 00061e08 Packets: 1183444 Errors: 0
PPS: 78896 Percentage: 0.0%
Thu Jan 13 16:16:02 CST 2005 entries: 0006489b Packets: 1246170 Errors:
3791 PPS: 83078 Percentage: 0.30%
Thu Jan 13 16:16:17 CST 2005 entries: 00066f1f Packets: 1233601 Errors:
4141 PPS: 82240 Percentage: 0.33%
Thu Jan 13 16:16:32 CST 2005 entries: 000695aa Packets: 1273744 Errors:
3798 PPS: 84916 Percentage: 0.29%
Thu Jan 13 16:16:47 CST 2005 entries: 0006ba5d Packets: 1263619 Errors:
4219 PPS: 84241 Percentage: 0.33%
Thu Jan 13 16:17:03 CST 2005 entries: 0006df19 Packets: 1240743 Errors:
3616 PPS: 82716 Percentage: 0.29%
----------one other snapshot------------
Thu Jan 13 16:09:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:459165122 errors:3427143 dropped:3427143 overruns:2045357
frame:0
1b5e031d 00000000 0000a829 00000000 00000000 00000000 00000000 00000000
0002cbd7
000072c1 00000000 00000001 00000000 00000000 00000000 00000000 00000000
00001e00
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
000d92e0 1a0ecfdc 014e19f9 00000000 00000000 000000a6 000000df 00000000
00009558 00000c5e 00000000 000b7605 000b6c68 00000000 00000000 07c9547f
0000398d
000d92e0 00001340 00005e40 00000000 00000000 0000005e 00000000 00000000
00000007 00000036 00000002 00000002 00000002 00000000 00000000 00001542
00000004
CPU0 CPU1
18: 123586344 8007 IO-APIC-level eth3
20: 1 18109191 IO-APIC-level eth2
Thu Jan 13 16:10:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:464242944 errors:3427143 dropped:3427143 overruns:2045357
frame:0
1bab839b 00000000 0000a82d 00000000 00000000 00000000 00000000 00000000
0002d2bc
000072e3 00000000 00000001 00000000 00000000 00000000 00000000 00000000
00001ed8
entries in_hit in_slow_tot in_no_route in_brd in_martian_dst in_martian_src
out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss
gc_dst_overflow in_hlist_search out_hlist_search
000dcaba 1a5bd4fd 014e9288 00000000 00000000 000000a6 000000df 00000000
00009678 00000c6a 00000000 000bee9e 000be489 00000000 00000000 08109f0f
00003a97
000dcaba 00001349 00005e58 00000000 00000000 0000005e 00000000 00000000
00000007 00000036 00000002 00000002 00000002 00000000 00000000 00001597
00000004
CPU0 CPU1
18: 125388992 8007 IO-APIC-level eth3
20: 1 18340497 IO-APIC-level eth2
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
[not found] <C925F8B43D79CC49ACD0601FB68FF50C02D39006@orsmsx408>
@ 2005-01-13 22:55 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-13 22:55 UTC (permalink / raw)
To: netdev; +Cc: Brandeburg, Jesse, Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 5387 bytes --]
Thu Jan 13 16:50:06 CST 2005 entries: 0000cc39 Packets: 1127110 Errors:
264136 PPS: 75140 Percentage: 23.43%
Thu Jan 13 16:50:22 CST 2005 entries: 000142c0 Packets: 1148930 Errors:
743 PPS: 76595 Percentage: 0.6%
Thu Jan 13 16:50:37 CST 2005 entries: 0001aa91 Packets: 1158591 Errors:
116 PPS: 77239 Percentage: 0.1%
Thu Jan 13 16:50:52 CST 2005 entries: 00021146 Packets: 1192241 Errors:
11648 PPS: 79482 Percentage: 0.97%
Thu Jan 13 16:51:07 CST 2005 entries: 00025c42 Packets: 1227489 Errors:
1056 PPS: 81832 Percentage: 0.8%
Thu Jan 13 16:51:22 CST 2005 entries: 00029ca0 Packets: 1217954 Errors:
365 PPS: 81196 Percentage: 0.2%
ethtool -S eth3
NIC statistics:
rx_packets: 8778549
tx_packets: 5
rx_bytes: 327267728
tx_bytes: 398
rx_errors: 575319
tx_errors: 0
rx_dropped: 360028
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 215291
rx_missed_errors: 215291
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 4622235024
rx_csum_offload_good: 8285347
int_tx_desc: 5
int_tx_queueempty: 6
int_link_state: 1
int_rx_frame_err: 0
int_rx_desc_min_thresh: 1330
int_rx_fifo_ovr: 47
int_rx_timer: 1768171
int_mdio: 0
int_rxcfg: 1
int_gpio_pins: 0
rx_csum_offload_errors: 698
On Wednesday 12 January 2005 06:49 pm, Brandeburg, Jesse wrote:
> I didn't send this to netdev... if the interrupt-counting code does
> something good then we can publish it.
>
> Jeremy, I would agree a faster CPU is going to help you handle more
> traffic. I can't speak to the routing thing. Your test would be very
> interesting if we could set up something similar here; unfortunately
> we're mostly interested in network device performance and not so much in
> kernel policy routing. I personally would be interested in having
> something set up to "play" with the driver on, but it may be doubtful
> how much time I would get to spend on it.
>
> Anyway, here is a driver that counts interrupt sources; you can get the
> counts from
> ethtool -S eth3
>
> you'll need to compile it like so:
> make CFLAGS_EXTRA=-DE1000_COUNT_ICR
>
> Any messages in /var/log/messages from the network stack? (I just saw
> your netdev email about dst cache overflow.) This driver has what we
> think should be the correct NAPI code in e1000_clean. If Robert's fix
> works better for you then stick with it, and let me know, because what I'm
> sending you now is what we're going forward with unless we hear about
> problems.
>
> If you want to chat over an instant messenger of some kind here is my
> info:
> Aim: jbrandeb
> msn: go_jesse@hotmail.com
> yahoo: go_jesse
>
> I appreciate your patience as we try different stuff. I know I'm poking
> at the driver a lot, but the high interrupt counts seem a little weird
> given the load of your system.
>
> jesse
>
> -----Original Message-----
> From: Jeremy M. Guthrie [mailto:jeremy.guthrie@berbee.com]
> Sent: Wednesday, January 12, 2005 2:31 PM
> To: netdev@oss.sgi.com
> Cc: Robert Olsson; Stephen Hemminger; Brandeburg, Jesse
> Subject: Re: V2.4 policy router operates faster/better than V2.6
>
> On Wednesday 12 January 2005 04:22 pm, Robert Olsson wrote:
> > Jeremy M. Guthrie writes:
> > > I am now getting some push back from the project manager on this
> > > performance problem. I am wondering if you think faster CPUs will
> > > a) help relieve the symptoms of this problem
> > > b) not help because now we will hit a '# of routes in the
> > > route-cache' problem
> > > c) or will help to a point till the # interrupts come back and
> > > bite us.
>
> > Back out the patch I sent and have hardirqs run the RX-softirq as you
> > did before, but something is very wrong. You didn't answer if there
> > was other load on the machine...
>
> I have backed out. As for the load, this box only does policy routing.
> Any other functions it performs are part of its automated system to
> download the next day's policy-routing config.
>
> > route-cache can probably be tuned as you have four times the linear
> > search I see on one PIII system at 110 kpps w. production traffic.
>
> How would I go about tuning that?
>
> > Of course the non-engineering solution is to buy more CPU... :-)
>
> That is good to know. This will help me calm the situation a bit. 8)
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 20:43 ` Jeremy M. Guthrie
@ 2005-01-13 23:13 ` David S. Miller
0 siblings, 0 replies; 88+ messages in thread
From: David S. Miller @ 2005-01-13 23:13 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert.Olsson
On Thu, 13 Jan 2005 14:43:30 -0600
"Jeremy M. Guthrie" <jeremy.guthrie@berbee.com> wrote:
> I don't show that proc file being there. I see it in 'iproute.c' but not in
> proc.
Sorry, my bad, it's
/proc/sys/net/ipv4/route/secret_interval
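The roughly ten-minute error spikes in the overnight log line up with the route-cache rebuild that secret_interval controls: each time the "entries" counter resets to a small value, that sample's error count jumps. A small awk filter can pick those flush events out of the script output; a sketch below, with sample lines copied from the log in this thread (the awk logic itself is mine, not from the original script):

```shell
# Flag samples where the route-cache 'entries' counter went backwards,
# i.e. the cache was flushed.  Equal-width hex strings compare correctly
# as plain strings, so no hex conversion is needed; appending "" forces
# awk to treat the field as a string rather than a number.
cat > /tmp/rtlog.txt <<'EOF'
01/13/05 19:40:38 entries: 000b0272 Pkts: 931697 Err: 2763 PPS: 62113
01/13/05 19:40:53 entries: 00005de3 Pkts: 854339 Err: 83088 PPS: 56955
01/13/05 19:50:13 entries: 000669e0 Pkts: 910599 Err: 2646 PPS: 60706
EOF
awk '{
    cur = $4 ""                       # entries field, forced to string
    if (NR > 1 && cur < prev)
        printf "%s %s: cache flushed (entries %s -> %s), Err=%s\n",
               $1, $2, prev, cur, $8
    prev = cur
}' /tmp/rtlog.txt
```

On the three samples above this flags only the 19:40:53 line, where entries drops from 000b0272 to 00005de3 and the error count jumps to 83088 — consistent with a flush every secret_interval (600 s by default) being the trigger.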
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 21:12 ` Robert Olsson
2005-01-13 22:27 ` Jeremy M. Guthrie
@ 2005-01-14 14:59 ` Jeremy M. Guthrie
2005-01-14 16:05 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-14 14:59 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 44980 bytes --]
I ran a script overnight using the modified driver you had given me, Robert.
It is interesting that there are almost always errors on the interface even
though we aren't getting dst-cache errors and are running with ~40% free CPU
now. I am going to switch over to Jesse's driver to see if his
instrumentation helps nail down where the problem is.
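The per-sample figures in the log below appear to be simple arithmetic on the raw counters: samples are ~15 seconds apart, so PPS = Pkts / 15 and Drop % = Err / Pkts * 100. A one-liner re-deriving them from one log line (the line is copied from the log; the 15-second interval is inferred from the timestamps):

```shell
# Re-derive PPS and drop percentage from the raw Pkts/Err counters of a
# single 15-second sample; $6 is the Pkts field, $8 the Err field.
echo '01/13/05 19:20:13 entries: 000690f5 Pkts: 948466 Err: 2788 PPS: 63231' |
awk '{ printf "PPS: %d  Drop %%: %.2f%%\n", $6 / 15, $8 / $6 * 100 }'
# PPS: 63231  Drop %: 0.29%
```

This matches the PPS and Drop % printed on the corresponding log line, so the script's numbers are internally consistent.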
ethtool -S eth3
NIC statistics:
rx_packets: 2722676103
tx_packets: 5
rx_bytes: 1171335471
tx_bytes: 398
rx_errors: 8558366
tx_errors: 0
rx_dropped: 1951692
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 6606674
rx_missed_errors: 6606674
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 1384150804783
rx_csum_offload_good: 2542900690
rx_csum_offload_errors: 243513
01/13/05 19:20:13 entries: 000690f5 Pkts: 948466 Err: 2788 PPS: 63231
Drop %: 0.29% Eth3RXInt: 411656 Eth2TXInt: 52008
01/13/05 19:20:28 entries: 0006ac8e Pkts: 967236 Err: 2582 PPS: 64482
Drop %: 0.26% Eth3RXInt: 412282 Eth2TXInt: 51066
01/13/05 19:20:43 entries: 0006c6f3 Pkts: 974209 Err: 2645 PPS: 64947
Drop %: 0.27% Eth3RXInt: 412606 Eth2TXInt: 51439
01/13/05 19:20:58 entries: 0006de75 Pkts: 926103 Err: 2654 PPS: 61740
Drop %: 0.28% Eth3RXInt: 411864 Eth2TXInt: 51888
01/13/05 19:30:03 entries: 00091311 Pkts: 991885 Err: 2507 PPS: 66125
Drop %: 0.25% Eth3RXInt: 420214 Eth2TXInt: 44873
01/13/05 19:30:18 entries: 00091ecc Pkts: 935298 Err: 2656 PPS: 62353
Drop %: 0.28% Eth3RXInt: 411803 Eth2TXInt: 49733
01/13/05 19:30:33 entries: 00092a91 Pkts: 937829 Err: 2797 PPS: 62521
Drop %: 0.29% Eth3RXInt: 408503 Eth2TXInt: 49473
01/13/05 19:30:48 entries: 00093612 Pkts: 958149 Err: 2698 PPS: 63876
Drop %: 0.28% Eth3RXInt: 415769 Eth2TXInt: 49552
01/13/05 19:40:08 entries: 000ae2ac Pkts: 917843 Err: 2654 PPS: 61189
Drop %: 0.28% Eth3RXInt: 394963 Eth2TXInt: 51912
01/13/05 19:40:23 entries: 000af369 Pkts: 917445 Err: 2117 PPS: 61163
Drop %: 0.23% Eth3RXInt: 394155 Eth2TXInt: 53112
01/13/05 19:40:38 entries: 000b0272 Pkts: 931697 Err: 2763 PPS: 62113
Drop %: 0.29% Eth3RXInt: 403602 Eth2TXInt: 51899
01/13/05 19:40:53 entries: 00005de3 Pkts: 854339 Err: 83088 PPS: 56955
Drop %: 9.72% Eth3RXInt: 296451 Eth2TXInt: 62172
01/13/05 19:50:13 entries: 000669e0 Pkts: 910599 Err: 2646 PPS: 60706
Drop %: 0.29% Eth3RXInt: 402982 Eth2TXInt: 53797
01/13/05 19:50:28 entries: 00068591 Pkts: 921526 Err: 2782 PPS: 61435
Drop %: 0.30% Eth3RXInt: 408122 Eth2TXInt: 51792
01/13/05 19:50:44 entries: 0006a046 Pkts: 909343 Err: 2996 PPS: 60622
Drop %: 0.32% Eth3RXInt: 410888 Eth2TXInt: 53315
01/13/05 19:50:59 entries: 0006b968 Pkts: 904949 Err: 2656 PPS: 60329
Drop %: 0.29% Eth3RXInt: 410376 Eth2TXInt: 53444
01/13/05 20:00:03 entries: 0009188b Pkts: 907073 Err: 2539 PPS: 60471
Drop %: 0.27% Eth3RXInt: 408658 Eth2TXInt: 52399
01/13/05 20:00:18 entries: 000924f5 Pkts: 930178 Err: 2386 PPS: 62011
Drop %: 0.25% Eth3RXInt: 410595 Eth2TXInt: 51822
01/13/05 20:00:33 entries: 000931a3 Pkts: 895381 Err: 2447 PPS: 59692
Drop %: 0.27% Eth3RXInt: 406246 Eth2TXInt: 52993
01/13/05 20:00:49 entries: 00093c95 Pkts: 908271 Err: 2338 PPS: 60551
Drop %: 0.25% Eth3RXInt: 416026 Eth2TXInt: 53732
01/13/05 20:10:08 entries: 000ad87d Pkts: 911705 Err: 2432 PPS: 60780
Drop %: 0.26% Eth3RXInt: 416020 Eth2TXInt: 51364
01/13/05 20:10:23 entries: 000ae3fc Pkts: 961446 Err: 2385 PPS: 64096
Drop %: 0.24% Eth3RXInt: 418902 Eth2TXInt: 48358
01/13/05 20:10:39 entries: 000aee6a Pkts: 913636 Err: 2347 PPS: 60909
Drop %: 0.25% Eth3RXInt: 414232 Eth2TXInt: 51236
01/13/05 20:10:54 entries: 00005e36 Pkts: 888657 Err: 88852 PPS: 59243
Drop %: 9.99% Eth3RXInt: 307888 Eth2TXInt: 58891
01/13/05 20:20:14 entries: 00064ea9 Pkts: 837630 Err: 2339 PPS: 55842
Drop %: 0.27% Eth3RXInt: 398772 Eth2TXInt: 57373
01/13/05 20:20:29 entries: 0006687f Pkts: 863629 Err: 2205 PPS: 57575
Drop %: 0.25% Eth3RXInt: 403868 Eth2TXInt: 54794
01/13/05 20:20:44 entries: 00068299 Pkts: 872662 Err: 2487 PPS: 58177
Drop %: 0.28% Eth3RXInt: 404412 Eth2TXInt: 55279
01/13/05 20:20:59 entries: 0006995c Pkts: 848795 Err: 2537 PPS: 56586
Drop %: 0.29% Eth3RXInt: 402853 Eth2TXInt: 56477
01/13/05 20:30:04 entries: 0009b042 Pkts: 875158 Err: 2970 PPS: 58343
Drop %: 0.33% Eth3RXInt: 382571 Eth2TXInt: 55533
01/13/05 20:30:19 entries: 0009c17f Pkts: 880014 Err: 2311 PPS: 58667
Drop %: 0.26% Eth3RXInt: 385079 Eth2TXInt: 53440
01/13/05 20:30:34 entries: 0009d1d6 Pkts: 900716 Err: 2453 PPS: 60047
Drop %: 0.27% Eth3RXInt: 396762 Eth2TXInt: 54786
01/13/05 20:30:49 entries: 0009e22d Pkts: 897585 Err: 2453 PPS: 59839
Drop %: 0.27% Eth3RXInt: 396506 Eth2TXInt: 54240
01/13/05 20:40:08 entries: 000b915f Pkts: 827962 Err: 2565 PPS: 55197
Drop %: 0.30% Eth3RXInt: 404875 Eth2TXInt: 58387
01/13/05 20:40:24 entries: 000b9cf2 Pkts: 821150 Err: 2460 PPS: 54743
Drop %: 0.29% Eth3RXInt: 399397 Eth2TXInt: 58173
01/13/05 20:40:39 entries: 000baaef Pkts: 878751 Err: 2733 PPS: 58583
Drop %: 0.31% Eth3RXInt: 402052 Eth2TXInt: 55343
01/13/05 20:40:54 entries: 000062d3 Pkts: 815754 Err: 77364 PPS: 54383
Drop %: 9.48% Eth3RXInt: 298308 Eth2TXInt: 63697
01/13/05 20:50:13 entries: 000655f5 Pkts: 874449 Err: 2589 PPS: 58296
Drop %: 0.29% Eth3RXInt: 397470 Eth2TXInt: 55040
01/13/05 20:50:29 entries: 000670ef Pkts: 862434 Err: 2335 PPS: 57495
Drop %: 0.27% Eth3RXInt: 399384 Eth2TXInt: 55292
01/13/05 20:50:44 entries: 00068e59 Pkts: 904716 Err: 2520 PPS: 60314
Drop %: 0.27% Eth3RXInt: 401899 Eth2TXInt: 52789
01/13/05 20:50:59 entries: 0006a6af Pkts: 877349 Err: 2660 PPS: 58489
Drop %: 0.30% Eth3RXInt: 401982 Eth2TXInt: 54755
01/13/05 21:00:03 entries: 0008ffba Pkts: 835650 Err: 2127 PPS: 55710
Drop %: 0.25% Eth3RXInt: 406517 Eth2TXInt: 57597
01/13/05 21:00:18 entries: 00090a7f Pkts: 842742 Err: 2102 PPS: 56182
Drop %: 0.24% Eth3RXInt: 407024 Eth2TXInt: 57559
01/13/05 21:00:33 entries: 000915dd Pkts: 844459 Err: 2683 PPS: 56297
Drop %: 0.31% Eth3RXInt: 406624 Eth2TXInt: 58553
01/13/05 21:00:48 entries: 000920b6 Pkts: 810715 Err: 2626 PPS: 54047
Drop %: 0.32% Eth3RXInt: 401342 Eth2TXInt: 59538
01/13/05 21:10:08 entries: 000aa6a7 Pkts: 829066 Err: 2159 PPS: 55271
Drop %: 0.26% Eth3RXInt: 401210 Eth2TXInt: 56607
01/13/05 21:10:23 entries: 000ab10b Pkts: 833478 Err: 2350 PPS: 55565
Drop %: 0.28% Eth3RXInt: 404397 Eth2TXInt: 56495
01/13/05 21:10:38 entries: 000abb70 Pkts: 813251 Err: 2425 PPS: 54216
Drop %: 0.29% Eth3RXInt: 402135 Eth2TXInt: 57976
01/13/05 21:10:53 entries: 000ac12b Pkts: 836855 Err: 2315 PPS: 55790
Drop %: 0.27% Eth3RXInt: 404963 Eth2TXInt: 56857
01/13/05 21:20:13 entries: 000c8587 Pkts: 780311 Err: 2011 PPS: 52020
Drop %: 0.25% Eth3RXInt: 387437 Eth2TXInt: 60951
01/13/05 21:20:28 entries: 000c927c Pkts: 781288 Err: 1957 PPS: 52085
Drop %: 0.25% Eth3RXInt: 386866 Eth2TXInt: 60672
01/13/05 21:20:43 entries: 000c9f80 Pkts: 771294 Err: 2185 PPS: 51419
Drop %: 0.28% Eth3RXInt: 386213 Eth2TXInt: 61530
01/13/05 21:20:58 entries: 000cabe0 Pkts: 761870 Err: 1847 PPS: 50791
Drop %: 0.24% Eth3RXInt: 383979 Eth2TXInt: 60613
01/13/05 21:30:02 entries: 000e1471 Pkts: 752945 Err: 2363 PPS: 50196
Drop %: 0.31% Eth3RXInt: 385445 Eth2TXInt: 62462
01/13/05 21:30:17 entries: 000e20e0 Pkts: 746890 Err: 2179 PPS: 49792
Drop %: 0.29% Eth3RXInt: 385645 Eth2TXInt: 62886
01/13/05 21:30:32 entries: 000e2bf5 Pkts: 759932 Err: 2353 PPS: 50662
Drop %: 0.30% Eth3RXInt: 390137 Eth2TXInt: 61938
01/13/05 21:30:47 entries: 000e368c Pkts: 770359 Err: 2226 PPS: 51357
Drop %: 0.28% Eth3RXInt: 393035 Eth2TXInt: 62280
01/13/05 21:40:07 entries: 000f66ab Pkts: 710993 Err: 1425 PPS: 47399
Drop %: 0.20% Eth3RXInt: 385138 Eth2TXInt: 66570
01/13/05 21:40:22 entries: 000f6eea Pkts: 722199 Err: 1727 PPS: 48146
Drop %: 0.23% Eth3RXInt: 386471 Eth2TXInt: 65154
01/13/05 21:40:37 entries: 000f7731 Pkts: 721459 Err: 1858 PPS: 48097
Drop %: 0.25% Eth3RXInt: 388601 Eth2TXInt: 64437
01/13/05 21:40:52 entries: 00005160 Pkts: 717256 Err: 50031 PPS: 47817
Drop %: 6.97% Eth3RXInt: 301824 Eth2TXInt: 67610
01/13/05 21:50:11 entries: 00060c77 Pkts: 744283 Err: 2209 PPS: 49618
Drop %: 0.29% Eth3RXInt: 385202 Eth2TXInt: 61977
01/13/05 21:50:26 entries: 0006251c Pkts: 712155 Err: 1731 PPS: 47477
Drop %: 0.24% Eth3RXInt: 375784 Eth2TXInt: 63593
01/13/05 21:50:42 entries: 00063f3d Pkts: 732397 Err: 2502 PPS: 48826
Drop %: 0.34% Eth3RXInt: 382823 Eth2TXInt: 64629
01/13/05 21:50:57 entries: 00065ea4 Pkts: 732251 Err: 1954 PPS: 48816
Drop %: 0.26% Eth3RXInt: 372229 Eth2TXInt: 64307
01/13/05 22:00:01 entries: 0008ed30 Pkts: 753502 Err: 1919 PPS: 50233
Drop %: 0.25% Eth3RXInt: 392290 Eth2TXInt: 63585
01/13/05 22:00:16 entries: 0008f743 Pkts: 765954 Err: 1783 PPS: 51063
Drop %: 0.23% Eth3RXInt: 398195 Eth2TXInt: 64681
01/13/05 22:00:31 entries: 00090132 Pkts: 745575 Err: 1748 PPS: 49705
Drop %: 0.23% Eth3RXInt: 393391 Eth2TXInt: 63904
01/13/05 22:00:46 entries: 00090bf1 Pkts: 750483 Err: 1810 PPS: 50032
Drop %: 0.24% Eth3RXInt: 395678 Eth2TXInt: 65592
01/13/05 22:10:05 entries: 000a69b4 Pkts: 740410 Err: 4101 PPS: 49360
Drop %: 0.55% Eth3RXInt: 393872 Eth2TXInt: 66508
01/13/05 22:10:20 entries: 000a732d Pkts: 737503 Err: 3481 PPS: 49166
Drop %: 0.47% Eth3RXInt: 392793 Eth2TXInt: 67370
01/13/05 22:10:35 entries: 000a7d9a Pkts: 735129 Err: 3227 PPS: 49008
Drop %: 0.43% Eth3RXInt: 394918 Eth2TXInt: 67208
01/13/05 22:10:51 entries: 000a87d0 Pkts: 731617 Err: 3099 PPS: 48774
Drop %: 0.42% Eth3RXInt: 391164 Eth2TXInt: 67757
01/13/05 22:20:09 entries: 000bcb09 Pkts: 674113 Err: 3321 PPS: 44940
Drop %: 0.49% Eth3RXInt: 379562 Eth2TXInt: 69421
01/13/05 22:20:25 entries: 000bd3b8 Pkts: 666534 Err: 2801 PPS: 44435
Drop %: 0.42% Eth3RXInt: 377072 Eth2TXInt: 70649
01/13/05 22:20:40 entries: 000bdd19 Pkts: 678034 Err: 2940 PPS: 45202
Drop %: 0.43% Eth3RXInt: 378198 Eth2TXInt: 69143
01/13/05 22:20:55 entries: 000be345 Pkts: 661543 Err: 2829 PPS: 44102
Drop %: 0.42% Eth3RXInt: 373469 Eth2TXInt: 69994
01/13/05 22:30:14 entries: 000d1135 Pkts: 678235 Err: 1982 PPS: 45215
Drop %: 0.29% Eth3RXInt: 373441 Eth2TXInt: 71223
01/13/05 22:30:29 entries: 000d1bf0 Pkts: 685493 Err: 1983 PPS: 45699
Drop %: 0.28% Eth3RXInt: 373410 Eth2TXInt: 68586
01/13/05 22:30:44 entries: 000d26eb Pkts: 684653 Err: 2007 PPS: 45643
Drop %: 0.29% Eth3RXInt: 373354 Eth2TXInt: 68304
01/13/05 22:30:59 entries: 000d2e78 Pkts: 664648 Err: 1527 PPS: 44309
Drop %: 0.22% Eth3RXInt: 365428 Eth2TXInt: 67413
01/13/05 22:40:03 entries: 000e44b4 Pkts: 645300 Err: 1954 PPS: 43020
Drop %: 0.30% Eth3RXInt: 369820 Eth2TXInt: 72192
01/13/05 22:40:18 entries: 000e4c66 Pkts: 643369 Err: 1702 PPS: 42891
Drop %: 0.26% Eth3RXInt: 366982 Eth2TXInt: 72002
01/13/05 22:40:33 entries: 000e5423 Pkts: 634654 Err: 1873 PPS: 42310
Drop %: 0.29% Eth3RXInt: 367668 Eth2TXInt: 72011
01/13/05 22:40:48 entries: 00002be0 Pkts: 626759 Err: 21655 PPS: 41783
Drop %: 3.45% Eth3RXInt: 317916 Eth2TXInt: 72957
01/13/05 22:50:07 entries: 00057b41 Pkts: 636115 Err: 1548 PPS: 42407
Drop %: 0.24% Eth3RXInt: 356743 Eth2TXInt: 70117
01/13/05 22:50:22 entries: 00059254 Pkts: 690168 Err: 2081 PPS: 46011
Drop %: 0.30% Eth3RXInt: 375141 Eth2TXInt: 69588
01/13/05 22:50:37 entries: 0005a923 Pkts: 667265 Err: 1647 PPS: 44484
Drop %: 0.24% Eth3RXInt: 367504 Eth2TXInt: 69765
01/13/05 22:50:52 entries: 0005be38 Pkts: 646829 Err: 1806 PPS: 43121
Drop %: 0.27% Eth3RXInt: 361265 Eth2TXInt: 72190
01/13/05 23:00:11 entries: 0008b137 Pkts: 616932 Err: 1566 PPS: 41128
Drop %: 0.25% Eth3RXInt: 361653 Eth2TXInt: 74776
01/13/05 23:00:26 entries: 0008bd2f Pkts: 655216 Err: 2433 PPS: 43681
Drop %: 0.37% Eth3RXInt: 366736 Eth2TXInt: 72637
01/13/05 23:00:42 entries: 0008c6ce Pkts: 650050 Err: 1792 PPS: 43336
Drop %: 0.27% Eth3RXInt: 369516 Eth2TXInt: 74231
01/13/05 23:00:57 entries: 0008cf8a Pkts: 622743 Err: 1628 PPS: 41516
Drop %: 0.26% Eth3RXInt: 362609 Eth2TXInt: 73576
01/13/05 23:10:00 entries: 0009ec88 Pkts: 558466 Err: 1254 PPS: 37231
Drop %: 0.22% Eth3RXInt: 341801 Eth2TXInt: 77394
01/13/05 23:10:15 entries: 0009f446 Pkts: 583270 Err: 1727 PPS: 38884
Drop %: 0.29% Eth3RXInt: 351413 Eth2TXInt: 77099
01/13/05 23:10:31 entries: 0009fd65 Pkts: 585702 Err: 1378 PPS: 39046
Drop %: 0.23% Eth3RXInt: 352610 Eth2TXInt: 76484
01/13/05 23:10:46 entries: 000a061f Pkts: 591411 Err: 1389 PPS: 39427
Drop %: 0.23% Eth3RXInt: 351609 Eth2TXInt: 76353
01/13/05 23:20:04 entries: 000b177a Pkts: 630155 Err: 1788 PPS: 42010
Drop %: 0.28% Eth3RXInt: 368741 Eth2TXInt: 76933
01/13/05 23:20:19 entries: 000b2070 Pkts: 608282 Err: 1147 PPS: 40552
Drop %: 0.18% Eth3RXInt: 356552 Eth2TXInt: 76472
01/13/05 23:20:34 entries: 000b28ae Pkts: 560094 Err: 1474 PPS: 37339
Drop %: 0.26% Eth3RXInt: 341610 Eth2TXInt: 77689
01/13/05 23:20:50 entries: 000b314f Pkts: 559777 Err: 1268 PPS: 37318
Drop %: 0.22% Eth3RXInt: 342036 Eth2TXInt: 77905
01/13/05 23:30:08 entries: 000c3e43 Pkts: 602182 Err: 1643 PPS: 40145
Drop %: 0.27% Eth3RXInt: 360109 Eth2TXInt: 76867
01/13/05 23:30:23 entries: 000c456a Pkts: 544079 Err: 1293 PPS: 36271
Drop %: 0.23% Eth3RXInt: 337942 Eth2TXInt: 78707
01/13/05 23:30:38 entries: 000c4d58 Pkts: 571859 Err: 1606 PPS: 38123
Drop %: 0.28% Eth3RXInt: 348591 Eth2TXInt: 78355
01/13/05 23:30:53 entries: 000c4f8d Pkts: 553967 Err: 1299 PPS: 36931
Drop %: 0.23% Eth3RXInt: 340507 Eth2TXInt: 77327
01/13/05 23:40:12 entries: 000d5cd4 Pkts: 571850 Err: 1459 PPS: 38123
Drop %: 0.25% Eth3RXInt: 342271 Eth2TXInt: 77382
01/13/05 23:40:27 entries: 000d66eb Pkts: 537062 Err: 1233 PPS: 35804
Drop %: 0.22% Eth3RXInt: 326360 Eth2TXInt: 79641
01/13/05 23:40:42 entries: 000d700d Pkts: 538091 Err: 1640 PPS: 35872
Drop %: 0.30% Eth3RXInt: 331841 Eth2TXInt: 79667
01/13/05 23:40:57 entries: 00006e93 Pkts: 534710 Err: 14693 PPS: 35647
Drop %: 2.74% Eth3RXInt: 255177 Eth2TXInt: 78981
01/13/05 23:50:01 entries: 00056e16 Pkts: 574215 Err: 1233 PPS: 38281
Drop %: 0.21% Eth3RXInt: 337409 Eth2TXInt: 78770
01/13/05 23:50:16 entries: 00058557 Pkts: 537083 Err: 1186 PPS: 35805
Drop %: 0.22% Eth3RXInt: 324471 Eth2TXInt: 78352
01/13/05 23:50:31 entries: 00059b52 Pkts: 565715 Err: 1586 PPS: 37714
Drop %: 0.28% Eth3RXInt: 340689 Eth2TXInt: 79699
01/13/05 23:50:46 entries: 0005af99 Pkts: 558546 Err: 1355 PPS: 37236
Drop %: 0.24% Eth3RXInt: 337133 Eth2TXInt: 78779
01/14/05 00:00:05 entries: 0008504d Pkts: 588805 Err: 1507 PPS: 39253
Drop %: 0.25% Eth3RXInt: 346266 Eth2TXInt: 77616
01/14/05 00:00:20 entries: 00085c27 Pkts: 570425 Err: 1672 PPS: 38028
Drop %: 0.29% Eth3RXInt: 338249 Eth2TXInt: 78085
01/14/05 00:00:35 entries: 00086908 Pkts: 578050 Err: 1327 PPS: 38536
Drop %: 0.22% Eth3RXInt: 340259 Eth2TXInt: 77927
01/14/05 00:00:50 entries: 00087438 Pkts: 566396 Err: 1135 PPS: 37759
Drop %: 0.20% Eth3RXInt: 338626 Eth2TXInt: 77983
01/14/05 00:10:09 entries: 0009a1f9 Pkts: 527344 Err: 1364 PPS: 35156
Drop %: 0.25% Eth3RXInt: 330864 Eth2TXInt: 80726
01/14/05 00:10:24 entries: 0009a950 Pkts: 519107 Err: 1251 PPS: 34607
Drop %: 0.24% Eth3RXInt: 327638 Eth2TXInt: 81313
01/14/05 00:10:39 entries: 0009b0d4 Pkts: 527377 Err: 1085 PPS: 35158
Drop %: 0.20% Eth3RXInt: 330084 Eth2TXInt: 80885
01/14/05 00:10:54 entries: 0009b7e8 Pkts: 527706 Err: 1603 PPS: 35180
Drop %: 0.30% Eth3RXInt: 331284 Eth2TXInt: 80577
01/14/05 00:20:13 entries: 000aa8fa Pkts: 477646 Err: 1147 PPS: 31843
Drop %: 0.24% Eth3RXInt: 313514 Eth2TXInt: 81244
01/14/05 00:20:28 entries: 000aaf4c Pkts: 506930 Err: 1148 PPS: 33795
Drop %: 0.22% Eth3RXInt: 324715 Eth2TXInt: 80366
01/14/05 00:20:43 entries: 000ab666 Pkts: 538447 Err: 1783 PPS: 35896
Drop %: 0.33% Eth3RXInt: 340587 Eth2TXInt: 80129
01/14/05 00:20:58 entries: 000abc90 Pkts: 503351 Err: 1151 PPS: 33556
Drop %: 0.22% Eth3RXInt: 324926 Eth2TXInt: 81205
01/14/05 00:30:01 entries: 000b95f8 Pkts: 503581 Err: 1588 PPS: 33572
Drop %: 0.31% Eth3RXInt: 323090 Eth2TXInt: 81310
01/14/05 00:30:16 entries: 000b9c10 Pkts: 509557 Err: 1373 PPS: 33970
Drop %: 0.26% Eth3RXInt: 327946 Eth2TXInt: 80876
01/14/05 00:30:31 entries: 000ba263 Pkts: 498061 Err: 1624 PPS: 33204
Drop %: 0.32% Eth3RXInt: 320360 Eth2TXInt: 80399
01/14/05 00:30:47 entries: 000ba8af Pkts: 508690 Err: 1030 PPS: 33912
Drop %: 0.20% Eth3RXInt: 325848 Eth2TXInt: 80562
01/14/05 00:40:05 entries: 000c8eb0 Pkts: 423203 Err: 925 PPS: 28213
Drop %: 0.21% Eth3RXInt: 289128 Eth2TXInt: 81558
01/14/05 00:40:20 entries: 000c9460 Pkts: 435865 Err: 905 PPS: 29057
Drop %: 0.20% Eth3RXInt: 290040 Eth2TXInt: 81407
01/14/05 00:40:35 entries: 000c9b1d Pkts: 470792 Err: 987 PPS: 31386
Drop %: 0.20% Eth3RXInt: 306570 Eth2TXInt: 80345
01/14/05 00:40:50 entries: 00003ace Pkts: 455627 Err: 7859 PPS: 30375
Drop %: 1.72% Eth3RXInt: 261150 Eth2TXInt: 80456
01/14/05 00:50:09 entries: 0004b083 Pkts: 491755 Err: 1253 PPS: 32783
Drop %: 0.25% Eth3RXInt: 316362 Eth2TXInt: 81009
01/14/05 00:50:24 entries: 0004c2e2 Pkts: 465117 Err: 975 PPS: 31007
Drop %: 0.20% Eth3RXInt: 301050 Eth2TXInt: 81862
01/14/05 00:50:39 entries: 0004d6dc Pkts: 459493 Err: 1327 PPS: 30632
Drop %: 0.28% Eth3RXInt: 298610 Eth2TXInt: 81555
01/14/05 00:50:54 entries: 0004eb2d Pkts: 487190 Err: 887 PPS: 32479
Drop %: 0.18% Eth3RXInt: 313779 Eth2TXInt: 81653
01/14/05 01:00:12 entries: 0007333a Pkts: 511807 Err: 1015 PPS: 34120
Drop %: 0.19% Eth3RXInt: 326166 Eth2TXInt: 80991
01/14/05 01:00:28 entries: 000742bb Pkts: 490129 Err: 1441 PPS: 32675
Drop %: 0.29% Eth3RXInt: 315417 Eth2TXInt: 81048
01/14/05 01:00:43 entries: 00075002 Pkts: 480424 Err: 1108 PPS: 32028
Drop %: 0.23% Eth3RXInt: 312870 Eth2TXInt: 81955
01/14/05 01:00:58 entries: 00075ab5 Pkts: 479570 Err: 1305 PPS: 31971
Drop %: 0.27% Eth3RXInt: 314682 Eth2TXInt: 81203
01/14/05 01:10:01 entries: 0008979c Pkts: 450310 Err: 1045 PPS: 30020
Drop %: 0.23% Eth3RXInt: 290075 Eth2TXInt: 81569
01/14/05 01:10:16 entries: 00089faf Pkts: 433228 Err: 1053 PPS: 28881
Drop %: 0.24% Eth3RXInt: 285539 Eth2TXInt: 81461
01/14/05 01:10:31 entries: 0008a763 Pkts: 440048 Err: 1367 PPS: 29336
Drop %: 0.31% Eth3RXInt: 291495 Eth2TXInt: 81925
01/14/05 01:10:46 entries: 0008af1e Pkts: 501093 Err: 1510 PPS: 33406
Drop %: 0.30% Eth3RXInt: 319267 Eth2TXInt: 82134
01/14/05 01:20:05 entries: 0009a978 Pkts: 484885 Err: 1591 PPS: 32325
Drop %: 0.32% Eth3RXInt: 304961 Eth2TXInt: 80773
01/14/05 01:20:20 entries: 0009b485 Pkts: 497080 Err: 1484 PPS: 33138
Drop %: 0.29% Eth3RXInt: 310173 Eth2TXInt: 81031
01/14/05 01:20:35 entries: 0009bfd7 Pkts: 503000 Err: 1555 PPS: 33533
Drop %: 0.30% Eth3RXInt: 315228 Eth2TXInt: 81955
01/14/05 01:20:50 entries: 0009cb17 Pkts: 473078 Err: 1004 PPS: 31538
Drop %: 0.21% Eth3RXInt: 302143 Eth2TXInt: 82036
01/14/05 01:30:09 entries: 000acb27 Pkts: 485756 Err: 1048 PPS: 32383
Drop %: 0.21% Eth3RXInt: 315112 Eth2TXInt: 81381
01/14/05 01:30:24 entries: 000ade3e Pkts: 487954 Err: 1116 PPS: 32530
Drop %: 0.22% Eth3RXInt: 296615 Eth2TXInt: 80622
01/14/05 01:30:39 entries: 000b1d66 Pkts: 496891 Err: 1365 PPS: 33126
Drop %: 0.27% Eth3RXInt: 245583 Eth2TXInt: 79328
01/14/05 01:30:54 entries: 000b55c3 Pkts: 525362 Err: 1661 PPS: 35024
Drop %: 0.31% Eth3RXInt: 249821 Eth2TXInt: 76762
01/14/05 01:40:12 entries: 000c5d9b Pkts: 441275 Err: 1262 PPS: 29418
Drop %: 0.28% Eth3RXInt: 295788 Eth2TXInt: 81607
01/14/05 01:40:27 entries: 000c638d Pkts: 432402 Err: 818 PPS: 28826
Drop %: 0.18% Eth3RXInt: 292080 Eth2TXInt: 82215
01/14/05 01:40:42 entries: 000c6a20 Pkts: 475611 Err: 1137 PPS: 31707
Drop %: 0.23% Eth3RXInt: 312107 Eth2TXInt: 82512
01/14/05 01:40:58 entries: 000062a2 Pkts: 422599 Err: 3778 PPS: 28173
Drop %: 0.89% Eth3RXInt: 236232 Eth2TXInt: 81681
01/14/05 01:50:01 entries: 00047c87 Pkts: 434595 Err: 1120 PPS: 28973
Drop %: 0.25% Eth3RXInt: 291004 Eth2TXInt: 81547
01/14/05 01:50:16 entries: 0004924f Pkts: 473118 Err: 1018 PPS: 31541
Drop %: 0.21% Eth3RXInt: 306494 Eth2TXInt: 81818
01/14/05 01:50:31 entries: 0004a615 Pkts: 454845 Err: 807 PPS: 30323
Drop %: 0.17% Eth3RXInt: 300121 Eth2TXInt: 81781
01/14/05 01:50:46 entries: 0004b989 Pkts: 446690 Err: 814 PPS: 29779
Drop %: 0.18% Eth3RXInt: 295153 Eth2TXInt: 82147
01/14/05 02:00:05 entries: 00071828 Pkts: 485939 Err: 1181 PPS: 32395
Drop %: 0.24% Eth3RXInt: 311765 Eth2TXInt: 81409
01/14/05 02:00:20 entries: 000727ca Pkts: 473909 Err: 1384 PPS: 31593
Drop %: 0.29% Eth3RXInt: 307111 Eth2TXInt: 81430
01/14/05 02:00:35 entries: 00073806 Pkts: 477835 Err: 1173 PPS: 31855
Drop %: 0.24% Eth3RXInt: 308946 Eth2TXInt: 81813
01/14/05 02:00:50 entries: 00074824 Pkts: 481520 Err: 918 PPS: 32101
Drop %: 0.19% Eth3RXInt: 310852 Eth2TXInt: 81876
01/14/05 02:10:08 entries: 0008a6fc Pkts: 441730 Err: 788 PPS: 29448
Drop %: 0.17% Eth3RXInt: 296119 Eth2TXInt: 82040
01/14/05 02:10:23 entries: 0008ae30 Pkts: 462573 Err: 964 PPS: 30838
Drop %: 0.20% Eth3RXInt: 305317 Eth2TXInt: 82031
01/14/05 02:10:39 entries: 0008b5ce Pkts: 462473 Err: 1037 PPS: 30831
Drop %: 0.22% Eth3RXInt: 305753 Eth2TXInt: 81449
01/14/05 02:10:54 entries: 0008baee Pkts: 433606 Err: 885 PPS: 28907
Drop %: 0.20% Eth3RXInt: 291876 Eth2TXInt: 81471
01/14/05 02:20:12 entries: 0009a92d Pkts: 458960 Err: 851 PPS: 30597
Drop %: 0.18% Eth3RXInt: 303308 Eth2TXInt: 81503
01/14/05 02:20:27 entries: 0009b002 Pkts: 498472 Err: 1221 PPS: 33231
Drop %: 0.24% Eth3RXInt: 325077 Eth2TXInt: 81679
01/14/05 02:20:42 entries: 0009b68c Pkts: 484378 Err: 1176 PPS: 32291
Drop %: 0.24% Eth3RXInt: 317011 Eth2TXInt: 80552
01/14/05 02:20:57 entries: 0009b9cf Pkts: 448711 Err: 932 PPS: 29914
Drop %: 0.20% Eth3RXInt: 300504 Eth2TXInt: 81309
01/14/05 02:30:01 entries: 000abe83 Pkts: 485720 Err: 1211 PPS: 32381
Drop %: 0.24% Eth3RXInt: 317102 Eth2TXInt: 81805
01/14/05 02:30:16 entries: 000ac418 Pkts: 456824 Err: 770 PPS: 30454
Drop %: 0.16% Eth3RXInt: 303707 Eth2TXInt: 82063
01/14/05 02:30:31 entries: 000aca2e Pkts: 464268 Err: 926 PPS: 30951
Drop %: 0.19% Eth3RXInt: 307084 Eth2TXInt: 82922
01/14/05 02:30:46 entries: 000acfd9 Pkts: 488002 Err: 1199 PPS: 32533
Drop %: 0.24% Eth3RXInt: 317994 Eth2TXInt: 82220
01/14/05 02:40:04 entries: 000b939f Pkts: 426137 Err: 1071 PPS: 28409
Drop %: 0.25% Eth3RXInt: 291613 Eth2TXInt: 80942
01/14/05 02:40:19 entries: 000b99b8 Pkts: 435682 Err: 641 PPS: 29045
Drop %: 0.14% Eth3RXInt: 295451 Eth2TXInt: 81203
01/14/05 02:40:34 entries: 000ba164 Pkts: 469842 Err: 1099 PPS: 31322
Drop %: 0.23% Eth3RXInt: 310350 Eth2TXInt: 81309
01/14/05 02:40:50 entries: 000039d8 Pkts: 432433 Err: 4158 PPS: 28828
Drop %: 0.96% Eth3RXInt: 258916 Eth2TXInt: 79816
01/14/05 02:50:08 entries: 00045e02 Pkts: 411631 Err: 1073 PPS: 27442
Drop %: 0.26% Eth3RXInt: 278161 Eth2TXInt: 81417
01/14/05 02:50:23 entries: 00046f7b Pkts: 410141 Err: 795 PPS: 27342
Drop %: 0.19% Eth3RXInt: 275966 Eth2TXInt: 81285
01/14/05 02:50:38 entries: 000481d2 Pkts: 433426 Err: 1245 PPS: 28895
Drop %: 0.28% Eth3RXInt: 289269 Eth2TXInt: 81567
01/14/05 02:50:53 entries: 000493db Pkts: 412782 Err: 969 PPS: 27518
Drop %: 0.23% Eth3RXInt: 277938 Eth2TXInt: 81170
01/14/05 03:00:12 entries: 00075b30 Pkts: 525329 Err: 1144 PPS: 35021
Drop %: 0.21% Eth3RXInt: 335221 Eth2TXInt: 82922
01/14/05 03:00:27 entries: 00076ad2 Pkts: 517248 Err: 1513 PPS: 34483
Drop %: 0.29% Eth3RXInt: 325825 Eth2TXInt: 82644
01/14/05 03:00:42 entries: 000778c5 Pkts: 455097 Err: 1211 PPS: 30339
Drop %: 0.26% Eth3RXInt: 301609 Eth2TXInt: 81940
01/14/05 03:00:57 entries: 000785d8 Pkts: 472256 Err: 1428 PPS: 31483
Drop %: 0.30% Eth3RXInt: 308709 Eth2TXInt: 82655
01/14/05 03:10:00 entries: 0008c8fd Pkts: 474980 Err: 1191 PPS: 31665
Drop %: 0.25% Eth3RXInt: 305622 Eth2TXInt: 80539
01/14/05 03:10:15 entries: 0008d062 Pkts: 457870 Err: 1130 PPS: 30524
Drop %: 0.24% Eth3RXInt: 296680 Eth2TXInt: 82396
01/14/05 03:10:30 entries: 0008d73b Pkts: 480551 Err: 1381 PPS: 32036
Drop %: 0.28% Eth3RXInt: 309356 Eth2TXInt: 81644
01/14/05 03:10:46 entries: 0008ddd3 Pkts: 465617 Err: 950 PPS: 31041
Drop %: 0.20% Eth3RXInt: 304579 Eth2TXInt: 82222
01/14/05 03:20:04 entries: 0009e568 Pkts: 457937 Err: 1151 PPS: 30529
Drop %: 0.25% Eth3RXInt: 305244 Eth2TXInt: 81326
01/14/05 03:20:19 entries: 0009eb65 Pkts: 436354 Err: 938 PPS: 29090
Drop %: 0.21% Eth3RXInt: 292012 Eth2TXInt: 81560
01/14/05 03:20:34 entries: 0009f1d4 Pkts: 420677 Err: 863 PPS: 28045
Drop %: 0.20% Eth3RXInt: 285478 Eth2TXInt: 81628
01/14/05 03:20:49 entries: 0009f71d Pkts: 451901 Err: 838 PPS: 30126
Drop %: 0.18% Eth3RXInt: 302797 Eth2TXInt: 81472
01/14/05 03:30:08 entries: 000ab0e6 Pkts: 496468 Err: 935 PPS: 33097
Drop %: 0.18% Eth3RXInt: 313564 Eth2TXInt: 81825
01/14/05 03:30:23 entries: 000ab6d9 Pkts: 487412 Err: 1234 PPS: 32494
Drop %: 0.25% Eth3RXInt: 311207 Eth2TXInt: 81957
01/14/05 03:30:38 entries: 000abd71 Pkts: 486905 Err: 1197 PPS: 32460
Drop %: 0.24% Eth3RXInt: 309132 Eth2TXInt: 81431
01/14/05 03:30:53 entries: 000ac18a Pkts: 501542 Err: 1095 PPS: 33436
Drop %: 0.21% Eth3RXInt: 317566 Eth2TXInt: 82225
01/14/05 03:40:11 entries: 000b7c09 Pkts: 453065 Err: 1297 PPS: 30204
Drop %: 0.28% Eth3RXInt: 301844 Eth2TXInt: 81878
01/14/05 03:40:26 entries: 000b80c8 Pkts: 419895 Err: 794 PPS: 27993
Drop %: 0.18% Eth3RXInt: 286286 Eth2TXInt: 81854
01/14/05 03:40:41 entries: 000b85bf Pkts: 437850 Err: 871 PPS: 29190
Drop %: 0.19% Eth3RXInt: 296486 Eth2TXInt: 81568
01/14/05 03:40:56 entries: 00005a7b Pkts: 441752 Err: 4133 PPS: 29450
Drop %: 0.93% Eth3RXInt: 246594 Eth2TXInt: 80314
01/14/05 03:50:00 entries: 00046892 Pkts: 403034 Err: 1065 PPS: 26868
Drop %: 0.26% Eth3RXInt: 274784 Eth2TXInt: 81157
01/14/05 03:50:15 entries: 000476c7 Pkts: 435578 Err: 729 PPS: 29038
Drop %: 0.16% Eth3RXInt: 292309 Eth2TXInt: 81034
01/14/05 03:50:30 entries: 000486aa Pkts: 412410 Err: 828 PPS: 27494
Drop %: 0.20% Eth3RXInt: 278210 Eth2TXInt: 81402
01/14/05 03:50:45 entries: 000495b6 Pkts: 416286 Err: 1004 PPS: 27752
Drop %: 0.24% Eth3RXInt: 281510 Eth2TXInt: 81008
01/14/05 04:00:03 entries: 0006a815 Pkts: 475366 Err: 1129 PPS: 31691
Drop %: 0.23% Eth3RXInt: 307887 Eth2TXInt: 81480
01/14/05 04:00:18 entries: 0006b54b Pkts: 474234 Err: 1045 PPS: 31615
Drop %: 0.22% Eth3RXInt: 307872 Eth2TXInt: 81418
01/14/05 04:00:34 entries: 0006c1b3 Pkts: 468519 Err: 920 PPS: 31234
Drop %: 0.19% Eth3RXInt: 309532 Eth2TXInt: 81400
01/14/05 04:00:49 entries: 0006cec6 Pkts: 427386 Err: 701 PPS: 28492
Drop %: 0.16% Eth3RXInt: 288788 Eth2TXInt: 80951
01/14/05 04:10:07 entries: 0008b49a Pkts: 450361 Err: 0 PPS: 30024
Drop %: 0.0% Eth3RXInt: 298760 Eth2TXInt: 82346
01/14/05 04:10:22 entries: 0008ba59 Pkts: 459431 Err: 0 PPS: 30628
Drop %: 0.0% Eth3RXInt: 301596 Eth2TXInt: 81793
01/14/05 04:10:37 entries: 0008c017 Pkts: 450040 Err: 0 PPS: 30002
Drop %: 0.0% Eth3RXInt: 298585 Eth2TXInt: 81996
01/14/05 04:10:52 entries: 0008c2e9 Pkts: 454740 Err: 963 PPS: 30316
Drop %: 0.21% Eth3RXInt: 299204 Eth2TXInt: 82428
01/14/05 04:20:11 entries: 0009746e Pkts: 417841 Err: 1077 PPS: 27856
Drop %: 0.25% Eth3RXInt: 285292 Eth2TXInt: 81971
01/14/05 04:20:26 entries: 00097988 Pkts: 444129 Err: 834 PPS: 29608
Drop %: 0.18% Eth3RXInt: 294925 Eth2TXInt: 82172
01/14/05 04:20:41 entries: 00097e6c Pkts: 440004 Err: 1152 PPS: 29333
Drop %: 0.26% Eth3RXInt: 298454 Eth2TXInt: 82420
01/14/05 04:20:56 entries: 000982c8 Pkts: 483248 Err: 1265 PPS: 32216
Drop %: 0.26% Eth3RXInt: 312325 Eth2TXInt: 80981
01/14/05 04:30:15 entries: 000a66f5 Pkts: 455322 Err: 1273 PPS: 30354
Drop %: 0.27% Eth3RXInt: 302449 Eth2TXInt: 82011
01/14/05 04:30:30 entries: 000a6c4f Pkts: 443848 Err: 938 PPS: 29589
Drop %: 0.21% Eth3RXInt: 296296 Eth2TXInt: 82826
01/14/05 04:30:45 entries: 000a714d Pkts: 457830 Err: 906 PPS: 30522
Drop %: 0.19% Eth3RXInt: 303239 Eth2TXInt: 82669
01/14/05 04:40:03 entries: 000b3c19 Pkts: 443891 Err: 1363 PPS: 29592
Drop %: 0.30% Eth3RXInt: 298216 Eth2TXInt: 81958
01/14/05 04:40:18 entries: 000b4131 Pkts: 436785 Err: 1055 PPS: 29119
Drop %: 0.24% Eth3RXInt: 293404 Eth2TXInt: 81866
01/14/05 04:40:33 entries: 000b4670 Pkts: 418272 Err: 886 PPS: 27884
Drop %: 0.21% Eth3RXInt: 284868 Eth2TXInt: 82277
01/14/05 04:40:48 entries: 00003a13 Pkts: 433659 Err: 5106 PPS: 28910
Drop %: 1.17% Eth3RXInt: 254964 Eth2TXInt: 80964
01/14/05 04:50:07 entries: 000487b1 Pkts: 426580 Err: 825 PPS: 28438
Drop %: 0.19% Eth3RXInt: 284115 Eth2TXInt: 81550
01/14/05 04:50:22 entries: 000499d9 Pkts: 429040 Err: 782 PPS: 28602
Drop %: 0.18% Eth3RXInt: 286171 Eth2TXInt: 81600
01/14/05 04:50:37 entries: 0004ab30 Pkts: 425725 Err: 965 PPS: 28381
Drop %: 0.22% Eth3RXInt: 287211 Eth2TXInt: 81419
01/14/05 04:50:52 entries: 0004bc1b Pkts: 453751 Err: 1116 PPS: 30250
Drop %: 0.24% Eth3RXInt: 296099 Eth2TXInt: 81084
01/14/05 05:00:11 entries: 000744da Pkts: 515817 Err: 1322 PPS: 34387
Drop %: 0.25% Eth3RXInt: 319462 Eth2TXInt: 81837
01/14/05 05:00:26 entries: 00075866 Pkts: 524724 Err: 1613 PPS: 34981
Drop %: 0.30% Eth3RXInt: 320024 Eth2TXInt: 81871
01/14/05 05:00:41 entries: 000768aa Pkts: 513511 Err: 1273 PPS: 34234
Drop %: 0.24% Eth3RXInt: 319514 Eth2TXInt: 81763
01/14/05 05:00:56 entries: 00077652 Pkts: 521796 Err: 1088 PPS: 34786
Drop %: 0.20% Eth3RXInt: 322121 Eth2TXInt: 81846
01/14/05 05:10:14 entries: 0008bcd7 Pkts: 441778 Err: 1271 PPS: 29451
Drop %: 0.28% Eth3RXInt: 293564 Eth2TXInt: 81867
01/14/05 05:10:29 entries: 0008c48f Pkts: 457062 Err: 1177 PPS: 30470
Drop %: 0.25% Eth3RXInt: 301042 Eth2TXInt: 82350
01/14/05 05:10:45 entries: 0008cb53 Pkts: 459844 Err: 940 PPS: 30656
Drop %: 0.20% Eth3RXInt: 303190 Eth2TXInt: 82519
01/14/05 05:20:03 entries: 0009b445 Pkts: 478691 Err: 889 PPS: 31912
Drop %: 0.18% Eth3RXInt: 313378 Eth2TXInt: 82914
01/14/05 05:20:18 entries: 0009ba72 Pkts: 485207 Err: 960 PPS: 32347
Drop %: 0.19% Eth3RXInt: 316409 Eth2TXInt: 82264
01/14/05 05:20:33 entries: 0009c150 Pkts: 486224 Err: 879 PPS: 32414
Drop %: 0.18% Eth3RXInt: 314240 Eth2TXInt: 81731
01/14/05 05:20:48 entries: 0009c35b Pkts: 449497 Err: 1155 PPS: 29966
Drop %: 0.25% Eth3RXInt: 299010 Eth2TXInt: 82788
01/14/05 05:30:07 entries: 000ab4ea Pkts: 485299 Err: 1095 PPS: 32353
Drop %: 0.22% Eth3RXInt: 314318 Eth2TXInt: 82271
01/14/05 05:30:22 entries: 000abae8 Pkts: 490269 Err: 797 PPS: 32684
Drop %: 0.16% Eth3RXInt: 317205 Eth2TXInt: 82521
01/14/05 05:30:37 entries: 000ac134 Pkts: 496198 Err: 1132 PPS: 33079
Drop %: 0.22% Eth3RXInt: 318933 Eth2TXInt: 81960
01/14/05 05:30:52 entries: 000ac2a1 Pkts: 501254 Err: 954 PPS: 33416
Drop %: 0.19% Eth3RXInt: 324082 Eth2TXInt: 83014
01/14/05 05:40:11 entries: 000baeeb Pkts: 593611 Err: 1357 PPS: 39574
Drop %: 0.22% Eth3RXInt: 360758 Eth2TXInt: 81969
01/14/05 05:40:26 entries: 000bb507 Pkts: 567372 Err: 1385 PPS: 37824
Drop %: 0.24% Eth3RXInt: 350017 Eth2TXInt: 82295
01/14/05 05:40:41 entries: 000bbb4c Pkts: 495626 Err: 1517 PPS: 33041
Drop %: 0.30% Eth3RXInt: 320385 Eth2TXInt: 81689
01/14/05 05:40:56 entries: 00006b5a Pkts: 488706 Err: 6440 PPS: 32580
Drop %: 1.31% Eth3RXInt: 247798 Eth2TXInt: 80410
01/14/05 05:50:14 entries: 0004e410 Pkts: 500546 Err: 1278 PPS: 33369
Drop %: 0.25% Eth3RXInt: 310059 Eth2TXInt: 80386
01/14/05 05:50:29 entries: 0004f8e5 Pkts: 506876 Err: 1275 PPS: 33791
Drop %: 0.25% Eth3RXInt: 316871 Eth2TXInt: 81128
01/14/05 05:50:45 entries: 00050dc1 Pkts: 510450 Err: 1152 PPS: 34030
Drop %: 0.22% Eth3RXInt: 320390 Eth2TXInt: 80666
01/14/05 06:00:03 entries: 0008089f Pkts: 519631 Err: 1531 PPS: 34642
Drop %: 0.29% Eth3RXInt: 323648 Eth2TXInt: 79945
01/14/05 06:00:18 entries: 00081247 Pkts: 535934 Err: 1163 PPS: 35728
Drop %: 0.21% Eth3RXInt: 327138 Eth2TXInt: 79790
01/14/05 06:00:33 entries: 00081bb3 Pkts: 540473 Err: 1305 PPS: 36031
Drop %: 0.24% Eth3RXInt: 332823 Eth2TXInt: 79750
01/14/05 06:00:48 entries: 00082320 Pkts: 541275 Err: 1622 PPS: 36085
Drop %: 0.29% Eth3RXInt: 334946 Eth2TXInt: 79207
01/14/05 06:10:07 entries: 00095795 Pkts: 559209 Err: 1351 PPS: 37280
Drop %: 0.24% Eth3RXInt: 336161 Eth2TXInt: 79479
01/14/05 06:10:22 entries: 000963c0 Pkts: 540560 Err: 1095 PPS: 36037
Drop %: 0.20% Eth3RXInt: 323433 Eth2TXInt: 79487
01/14/05 06:10:37 entries: 00096fa6 Pkts: 559203 Err: 1492 PPS: 37280
Drop %: 0.26% Eth3RXInt: 330691 Eth2TXInt: 80035
01/14/05 06:10:52 entries: 000979fb Pkts: 542626 Err: 1438 PPS: 36175
Drop %: 0.26% Eth3RXInt: 325674 Eth2TXInt: 79483
01/14/05 06:20:11 entries: 000ad185 Pkts: 537524 Err: 1585 PPS: 35834
Drop %: 0.29% Eth3RXInt: 329987 Eth2TXInt: 79144
01/14/05 06:20:26 entries: 000adaba Pkts: 513026 Err: 1035 PPS: 34201
Drop %: 0.20% Eth3RXInt: 317912 Eth2TXInt: 79128
01/14/05 06:20:41 entries: 000ae400 Pkts: 532172 Err: 1700 PPS: 35478
Drop %: 0.31% Eth3RXInt: 326356 Eth2TXInt: 79715
01/14/05 06:20:56 entries: 000aeb41 Pkts: 580261 Err: 1658 PPS: 38684
Drop %: 0.28% Eth3RXInt: 346806 Eth2TXInt: 79013
01/14/05 06:30:00 entries: 000c1594 Pkts: 572401 Err: 1903 PPS: 38160
Drop %: 0.33% Eth3RXInt: 341422 Eth2TXInt: 78273
01/14/05 06:30:15 entries: 000c1ea7 Pkts: 585850 Err: 1822 PPS: 39056
Drop %: 0.31% Eth3RXInt: 345728 Eth2TXInt: 77805
01/14/05 06:30:30 entries: 000c27b2 Pkts: 594077 Err: 1693 PPS: 39605
Drop %: 0.28% Eth3RXInt: 348358 Eth2TXInt: 76571
01/14/05 06:30:45 entries: 000c3123 Pkts: 582267 Err: 1706 PPS: 38817
Drop %: 0.29% Eth3RXInt: 342533 Eth2TXInt: 77763
01/14/05 06:40:04 entries: 000dfb48 Pkts: 710678 Err: 2075 PPS: 47378
Drop %: 0.29% Eth3RXInt: 377850 Eth2TXInt: 70656
01/14/05 06:40:19 entries: 000e083c Pkts: 660634 Err: 1777 PPS: 44042
Drop %: 0.26% Eth3RXInt: 353285 Eth2TXInt: 71353
01/14/05 06:40:34 entries: 000e131e Pkts: 683136 Err: 1696 PPS: 45542
Drop %: 0.24% Eth3RXInt: 369381 Eth2TXInt: 72622
01/14/05 06:40:49 entries: 00005c1d Pkts: 713262 Err: 36148 PPS: 47550
Drop %: 50.6% Eth3RXInt: 286910 Eth2TXInt: 71729
01/14/05 06:50:08 entries: 0006066c Pkts: 695785 Err: 1970 PPS: 46385
Drop %: 0.28% Eth3RXInt: 368369 Eth2TXInt: 73034
01/14/05 06:50:23 entries: 0006209f Pkts: 705028 Err: 1715 PPS: 47001
Drop %: 0.24% Eth3RXInt: 368574 Eth2TXInt: 72892
01/14/05 06:50:38 entries: 000639ca Pkts: 702841 Err: 1706 PPS: 46856
Drop %: 0.24% Eth3RXInt: 370024 Eth2TXInt: 72876
01/14/05 06:50:54 entries: 000651d2 Pkts: 726747 Err: 2020 PPS: 48449
Drop %: 0.27% Eth3RXInt: 376832 Eth2TXInt: 71366
01/14/05 07:00:13 entries: 0008f249 Pkts: 677362 Err: 1595 PPS: 45157
Drop %: 0.23% Eth3RXInt: 370724 Eth2TXInt: 69981
01/14/05 07:00:28 entries: 0008fe96 Pkts: 662406 Err: 1423 PPS: 44160
Drop %: 0.21% Eth3RXInt: 362967 Eth2TXInt: 72611
01/14/05 07:00:43 entries: 00090c13 Pkts: 662222 Err: 1844 PPS: 44148
Drop %: 0.27% Eth3RXInt: 359869 Eth2TXInt: 72181
01/14/05 07:00:58 entries: 00091789 Pkts: 656859 Err: 1729 PPS: 43790
Drop %: 0.26% Eth3RXInt: 358838 Eth2TXInt: 70074
01/14/05 07:10:02 entries: 000abdb6 Pkts: 697539 Err: 2042 PPS: 46502
Drop %: 0.29% Eth3RXInt: 379747 Eth2TXInt: 70269
01/14/05 07:10:17 entries: 000ac9d9 Pkts: 704847 Err: 1847 PPS: 46989
Drop %: 0.26% Eth3RXInt: 374670 Eth2TXInt: 69898
01/14/05 07:10:32 entries: 000ad57b Pkts: 703755 Err: 2090 PPS: 46917
Drop %: 0.29% Eth3RXInt: 377955 Eth2TXInt: 70850
01/14/05 07:10:47 entries: 000adebf Pkts: 709598 Err: 1790 PPS: 47306
Drop %: 0.25% Eth3RXInt: 379562 Eth2TXInt: 69334
01/14/05 07:20:06 entries: 000c9ceb Pkts: 790839 Err: 2372 PPS: 52722
Drop %: 0.29% Eth3RXInt: 392123 Eth2TXInt: 63101
01/14/05 07:20:21 entries: 000caa53 Pkts: 815293 Err: 2200 PPS: 54352
Drop %: 0.26% Eth3RXInt: 391243 Eth2TXInt: 63859
01/14/05 07:20:37 entries: 000cb70a Pkts: 801060 Err: 2421 PPS: 53404
Drop %: 0.30% Eth3RXInt: 392279 Eth2TXInt: 63283
01/14/05 07:20:52 entries: 000cc107 Pkts: 816395 Err: 2279 PPS: 54426
Drop %: 0.27% Eth3RXInt: 390125 Eth2TXInt: 63646
01/14/05 07:30:11 entries: 000e682c Pkts: 817455 Err: 2501 PPS: 54497
Drop %: 0.30% Eth3RXInt: 396197 Eth2TXInt: 63137
01/14/05 07:30:26 entries: 000e756b Pkts: 820908 Err: 2438 PPS: 54727
Drop %: 0.29% Eth3RXInt: 391849 Eth2TXInt: 63660
01/14/05 07:30:41 entries: 000e8389 Pkts: 816010 Err: 2613 PPS: 54400
Drop %: 0.32% Eth3RXInt: 392971 Eth2TXInt: 64416
01/14/05 07:30:56 entries: 000e8d18 Pkts: 827390 Err: 2536 PPS: 55159
Drop %: 0.30% Eth3RXInt: 394834 Eth2TXInt: 61765
01/14/05 07:40:01 entries: 0010436e Pkts: 863423 Err: 2368 PPS: 57561
Drop %: 0.27% Eth3RXInt: 405190 Eth2TXInt: 60998
01/14/05 07:40:16 entries: 00104fda Pkts: 858511 Err: 2494 PPS: 57234
Drop %: 0.29% Eth3RXInt: 403185 Eth2TXInt: 61815
01/14/05 07:40:31 entries: 00105c15 Pkts: 854565 Err: 2470 PPS: 56971
Drop %: 0.28% Eth3RXInt: 404894 Eth2TXInt: 61287
01/14/05 07:40:46 entries: 000057a0 Pkts: 826768 Err: 113636 PPS: 55117
Drop %: 13.74% Eth3RXInt: 302767 Eth2TXInt: 65567
01/14/05 07:50:06 entries: 00079684 Pkts: 913524 Err: 2245 PPS: 60901
Drop %: 0.24% Eth3RXInt: 400066 Eth2TXInt: 58282
01/14/05 07:50:21 entries: 0007b783 Pkts: 864505 Err: 2505 PPS: 57633
Drop %: 0.28% Eth3RXInt: 393922 Eth2TXInt: 61211
01/14/05 07:50:36 entries: 0007d8a8 Pkts: 883376 Err: 2693 PPS: 58891
Drop %: 0.30% Eth3RXInt: 397785 Eth2TXInt: 61160
01/14/05 07:50:51 entries: 0007f40c Pkts: 858829 Err: 2394 PPS: 57255
Drop %: 0.27% Eth3RXInt: 398098 Eth2TXInt: 60954
01/14/05 08:00:11 entries: 000a460b Pkts: 1000645 Err: 2906 PPS: 66709
Drop %: 0.29% Eth3RXInt: 406845 Eth2TXInt: 53497
01/14/05 08:00:26 entries: 000a55a7 Pkts: 965824 Err: 3062 PPS: 64388
Drop %: 0.31% Eth3RXInt: 402766 Eth2TXInt: 54223
01/14/05 08:00:41 entries: 000a6548 Pkts: 975763 Err: 3184 PPS: 65050
Drop %: 0.32% Eth3RXInt: 408337 Eth2TXInt: 55173
01/14/05 08:00:56 entries: 000a726e Pkts: 963100 Err: 3105 PPS: 64206
Drop %: 0.32% Eth3RXInt: 409883 Eth2TXInt: 54808
01/14/05 08:10:01 entries: 000c7fc9 Pkts: 1045671 Err: 2629 PPS: 69711
Drop %: 0.25% Eth3RXInt: 412912 Eth2TXInt: 52786
01/14/05 08:10:16 entries: 000c8dd3 Pkts: 1059809 Err: 3206 PPS: 70653
Drop %: 0.30% Eth3RXInt: 414254 Eth2TXInt: 51833
01/14/05 08:10:32 entries: 000c9c1a Pkts: 1026487 Err: 3277 PPS: 68432
Drop %: 0.31% Eth3RXInt: 414313 Eth2TXInt: 51822
01/14/05 08:10:47 entries: 000ca57f Pkts: 1026565 Err: 3149 PPS: 68437
Drop %: 0.30% Eth3RXInt: 412748 Eth2TXInt: 51581
01/14/05 08:20:07 entries: 000eb744 Pkts: 1086031 Err: 3287 PPS: 72402
Drop %: 0.30% Eth3RXInt: 406328 Eth2TXInt: 48924
01/14/05 08:20:22 entries: 000ec65d Pkts: 1066622 Err: 2733 PPS: 71108
Drop %: 0.25% Eth3RXInt: 408729 Eth2TXInt: 49285
01/14/05 08:20:37 entries: 000ed550 Pkts: 1069185 Err: 2919 PPS: 71279
Drop %: 0.27% Eth3RXInt: 409380 Eth2TXInt: 49829
01/14/05 08:20:52 entries: 000ede68 Pkts: 1098310 Err: 3181 PPS: 73220
Drop %: 0.28% Eth3RXInt: 414812 Eth2TXInt: 50289
01/14/05 08:30:12 entries: 0010ba04 Pkts: 1132309 Err: 3248 PPS: 75487
Drop %: 0.28% Eth3RXInt: 414595 Eth2TXInt: 45562
01/14/05 08:30:27 entries: 0010c986 Pkts: 1154479 Err: 3665 PPS: 76965
Drop %: 0.31% Eth3RXInt: 404918 Eth2TXInt: 45873
01/14/05 08:30:43 entries: 0010d8e5 Pkts: 1192416 Err: 4128 PPS: 79494
Drop %: 0.34% Eth3RXInt: 405126 Eth2TXInt: 43819
01/14/05 08:30:58 entries: 0010e633 Pkts: 1166668 Err: 3361 PPS: 77777
Drop %: 0.28% Eth3RXInt: 415004 Eth2TXInt: 44711
01/14/05 08:40:03 entries: 0012aad0 Pkts: 1138575 Err: 3211 PPS: 75905
Drop %: 0.28% Eth3RXInt: 373013 Eth2TXInt: 47718
01/14/05 08:40:18 entries: 0012c2e3 Pkts: 1158909 Err: 3481 PPS: 77260
Drop %: 0.30% Eth3RXInt: 366664 Eth2TXInt: 45401
01/14/05 08:40:33 entries: 0012d5fb Pkts: 1155017 Err: 3674 PPS: 77001
Drop %: 0.31% Eth3RXInt: 389075 Eth2TXInt: 46785
01/14/05 08:40:48 entries: 00008aee Pkts: 1033983 Err: 174485 PPS: 68932
Drop %: 16.87% Eth3RXInt: 246160 Eth2TXInt: 60997
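The periodic drop spikes in the log above (e.g. 50.6% at 06:40:49 and 16.87% at 08:40:48, each coinciding with the entries counter resetting) can be picked out mechanically. A small sketch, assuming the two-lines-per-sample rtstat log format shown here, that pairs each timestamp line with its "Drop %" line and flags intervals above 1%:

```shell
flag_gc_spikes() {
    # Remember the timestamp from each sample's first line, then flag the
    # following "Drop %" line if the drop rate exceeds 1%.
    awk '/^[0-9]+\// { ts = $2 }
         /^Drop %/ {
             pct = $3; sub(/%$/, "", pct)
             if (pct + 0 > 1.0) printf "%s drop=%s%%\n", ts, pct
         }'
}

# Self-contained sample taken from the log above:
flag_gc_spikes <<'EOF'
01/14/05 06:40:34 entries: 000e131e Pkts: 683136 Err: 1696 PPS: 45542
Drop %: 0.24% Eth3RXInt: 369381 Eth2TXInt: 72622
01/14/05 06:40:49 entries: 00005c1d Pkts: 713262 Err: 36148 PPS: 47550
Drop %: 50.6% Eth3RXInt: 286910 Eth2TXInt: 71729
EOF
# -> 06:40:49 drop=50.6%
```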
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-13 22:27 ` Jeremy M. Guthrie
@ 2005-01-14 15:44 ` Robert Olsson
0 siblings, 0 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-14 15:44 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
>
> IN:Hits are cache hits yes? Tot, are the total number of flows created since
> we last looked at the total flow count, correct?
That's correct.
> What would cause a packet to drop in the network stack and thus showup in
> /proc/net/softnet_stat?
Only netif_rx (non-NAPI) drivers drop in the backlog; NAPI drivers drop
earlier, in the device itself.
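The counters in question live in /proc/net/softnet_stat, one row of hex columns per CPU. A minimal sketch to decode them, assuming the 2.6-era layout where the first three columns are packets processed, backlog drops, and time_squeeze events:

```shell
decode_softnet() {
    # One row per CPU; convert the first three hex columns to decimal.
    cpu=0
    while read -r processed dropped squeeze rest; do
        printf 'cpu%d processed=%d dropped=%d time_squeeze=%d\n' \
            "$cpu" "$((0x$processed))" "$((0x$dropped))" "$((0x$squeeze))"
        cpu=$((cpu + 1))
    done
}

# Self-contained sample; on a live box: decode_softnet < /proc/net/softnet_stat
decode_softnet <<'EOF'
0002a8f1 00000123 00000004 00000000 00000000 00000000 00000000 00000000 00000000
0001ffff 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000
EOF
# -> cpu0 processed=174321 dropped=291 time_squeeze=4
# -> cpu1 processed=131071 dropped=0 time_squeeze=1
```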
> Performance has picked up. I am not dropping packets anymore except during
> GC. I upped my interval from 600 seconds to 1800 seconds.
Yes, and as far as I can see you have CPU left, as the RX interrupts on eth3
indicate (if the patch is applied), and also the low number of time_squeeze
events in softnet_stat.
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-14 14:59 ` Jeremy M. Guthrie
@ 2005-01-14 16:05 ` Robert Olsson
2005-01-14 19:00 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-14 16:05 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> I ran a script overnight using the modified driver you had given me,
> Robert. It is interesting that there are almost always errors on the
> interface even though we aren't getting dst-cache errors and are running
> ~40% free CPU now. I am going to switch over to Jesse's driver to see if
> his instrumentation helps nail down where the problem is.
> rx_packets: 2722676103
> tx_packets: 5
> rx_bytes: 1171335471
> tx_bytes: 398
> rx_errors: 8558366
> tx_errors: 0
> rx_dropped: 1951692
It might come from the periodic work of the GC process; correlate the drops
with rtstat. I think the GC process can be made smoother, but study and
experimentation are probably needed, as the GC process is quite complex.
Maybe someone has looked into this already?
Also, you reported fewer drops when you increased the size of the RX ring,
while I see higher system performance with smaller RX rings. Both statements
may actually be true: a big ring may buffer packets during periodic work
such as GC.
Also, I have an experimental patch so you can route without the route hash,
as a comparison. You have to be brave...
BTW, we have had this thread going for a week,
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-14 16:05 ` Robert Olsson
@ 2005-01-14 19:00 ` Jeremy M. Guthrie
2005-01-14 19:26 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-14 19:00 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 2686 bytes --]
These stats are from the driver Jesse instrumented.
ethtool -S eth3 | egrep -v ": 0"
NIC statistics:
rx_packets: 1130958533
tx_packets: 5
rx_bytes: 2373643298
tx_bytes: 398
rx_errors: 4388190
rx_dropped: 1486253
rx_fifo_errors: 2901937
rx_missed_errors: 2901937
rx_long_byte_count: 582194228258
rx_csum_offload_good: 1040376597
int_tx_desc: 4
int_tx_queueempty: 5
int_link_state: 1
int_rx_desc_min_thresh: 20704
int_rx_fifo_ovr: 1208
int_rx_timer: 331925913
int_rxcfg: 1
rx_csum_offload_errors: 325045
I am seeing more intervals w/ his driver where I run at zero errors. Of the
last 877 samples since I switched drivers, I see 231 samples w/ zero errors.
On Friday 14 January 2005 10:05 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > I ran a script overnight using the modified driver you had given me,
> > Robert. It is interesting that there are almost always errors on the
> > interface even though we aren't getting dst-cache errors and are running
> > ~40% free CPU now. I am going to switch over to Jesse's driver to see if
> > his instrumentation helps nail down where the problem is.
> >
> >
> > rx_packets: 2722676103
> > tx_packets: 5
> > rx_bytes: 1171335471
> > tx_bytes: 398
> > rx_errors: 8558366
> > tx_errors: 0
> > rx_dropped: 1951692
>
> It might come from the periodic work of the GC process; correlate the
> drops with rtstat.
>
> I think the GC process can be made smoother, but study and experimentation
> are probably needed, as the GC process is quite complex. Maybe someone has
> looked into this already?
> Also, you reported fewer drops when you increased the size of the RX ring,
> while I see higher system performance with smaller RX rings. Both
> statements may actually be true: a big ring may buffer packets during
> periodic work such as GC.
I am running w/ 2048 input buffers. I am going to increase to 10K and try
again.
> Also, I have an experimental patch so you can route without the route
> hash, as a comparison. You have to be brave...
I am about 300 miles from the machine this week and next week, though I
might be able to try it this weekend while I am home.
> BTW, we have had this thread going for a week,
Pardon me for being a bit naive but I am not understanding this last comment.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-14 19:00 ` Jeremy M. Guthrie
@ 2005-01-14 19:26 ` Jeremy M. Guthrie
2005-01-16 12:32 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-14 19:26 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 3208 bytes --]
I actually upped the buffer count to 8192 buffers instead of 10k. Of the 74
samples I have thus far, 57 have been clean of errors. Most of the sample
errors appear to be shortly after the cache flush.
I am going to let the buffers run for a bit longer, then I have to go; I
have a five-hour drive ahead of me.
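A later message in this thread notes the card tops out at 4K RX buffers, so the 8192-buffer request was presumably clamped. A tiny sketch of that clamping; the HW_MAX of 4096 is an assumption drawn from that "max on the card of 4K" remark, and the real e1000 driver does its own validation:

```shell
# Assumed hardware limit for this era's e1000 RX descriptor ring.
HW_MAX=4096

clamp_ring() {
    # Return the requested RX ring size, clamped to the hardware maximum.
    req=$1
    if [ "$req" -gt "$HW_MAX" ]; then
        echo "$HW_MAX"
    else
        echo "$req"
    fi
}

clamp_ring 8192   # -> 4096 (a request beyond the card's limit is clamped)
clamp_ring 2048   # -> 2048
```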
On Friday 14 January 2005 01:00 pm, Jeremy M. Guthrie wrote:
> These stats are from the driver Jesse instrumented.
> ethtool -S eth3 | egrep -v ": 0"
> NIC statistics:
> rx_packets: 1130958533
> tx_packets: 5
> rx_bytes: 2373643298
> tx_bytes: 398
> rx_errors: 4388190
> rx_dropped: 1486253
> rx_fifo_errors: 2901937
> rx_missed_errors: 2901937
> rx_long_byte_count: 582194228258
> rx_csum_offload_good: 1040376597
> int_tx_desc: 4
> int_tx_queueempty: 5
> int_link_state: 1
> int_rx_desc_min_thresh: 20704
> int_rx_fifo_ovr: 1208
> int_rx_timer: 331925913
> int_rxcfg: 1
> rx_csum_offload_errors: 325045
>
> I am seeing more times w/ his driver where I run at zero errors. Of the
> last 877 samples since I switched drivers, I see 231 samples w/ zero
> errors.
>
> On Friday 14 January 2005 10:05 am, Robert Olsson wrote:
> > Jeremy M. Guthrie writes:
> > > I ran a script overnight using the modified driver you had given me,
> > > Robert. It is interesting that there are almost always errors on the
> > > interface even though we aren't getting dst-cache errors and are
> > > running ~40% free CPU now. I am going to switch over to Jesse's driver
> > > to see if his instrumentation helps nail down where the problem is.
> > >
> > >
> > > rx_packets: 2722676103
> > > tx_packets: 5
> > > rx_bytes: 1171335471
> > > tx_bytes: 398
> > > rx_errors: 8558366
> > > tx_errors: 0
> > > rx_dropped: 1951692
> >
> > It might come from the periodic work of the GC process; correlate the
> > drops with rtstat.
> >
> > I think the GC process can be made smoother, but study and
> > experimentation are probably needed, as the GC process is quite complex.
> > Maybe someone has looked into this already?
> > Also, you reported fewer drops when you increased the size of the RX
> > ring, while I see higher system performance with smaller RX rings. Both
> > statements may actually be true: a big ring may buffer packets during
> > periodic work such as GC.
>
> I am running w/ 2048 input buffers. I am going to increase to 10K and try
> again.
>
> > Also, I have an experimental patch so you can route without the route
> > hash, as a comparison. You have to be brave...
>
> I am about 300 miles from the machine this week and next week, though I
> might be able to try it this weekend while I am home.
>
> > BTW, we have had this thread going for a week,
>
> Pardon me for being a bit naive but I am not understanding this last
> comment.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-14 19:26 ` Jeremy M. Guthrie
@ 2005-01-16 12:32 ` Robert Olsson
2005-01-16 16:22 ` Jeremy M. Guthrie
2005-01-19 15:03 ` Jeremy M. Guthrie
0 siblings, 2 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-16 12:32 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> I actually upped the buffer count to 8192 buffers instead of 10k.
> Of the 74 samples I have thus far, 57 have been clean of errors.
> Most of the sample errors appear to be shortly after the cache flush.
I don't really believe in increasing RX buffers to this extent. We verified
that you have CPU available and that the drops occur when the timer-based GC
happens. Increasing buffers decreases overall performance and adds jitter.
We also saw the timer-based GC taking the dst-entries from about 600k down
to 40k in one shot. I think this is what we should look into. Not only is
the GC itself "work"; after GC a lot of flows have to be recreated, doing
fib lookups and creating new entries. We want to smooth the GC process so
it happens more frequently and does less work each time.
Some time ago an "in-flow" GC (as opposed to the timer-based one) was added
to the routing code; look for "cand" in route.c. In a setup like yours (and
ours) it would be better to rely on this process to a higher extent. Anyway,
in /proc/sys/net/ipv4/route/ you have the files:
gc_elasticity, gc_interval, gc_thresh, etc.; I would avoid gc_min_interval.
You can play with your running system and watch for drops without causing
your users too much pain.
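A read-only way to look over the tunables named here, as a sketch; on kernels after the IPv4 route cache was removed (3.6+), some of these files no longer exist, so each one is probed first:

```shell
show_route_gc_knobs() {
    # Report the current value of each route-cache GC tunable, or note
    # that the file is absent on this kernel.
    for knob in gc_elasticity gc_interval gc_min_interval gc_thresh secret_interval; do
        f=/proc/sys/net/ipv4/route/$knob
        if [ -r "$f" ]; then
            printf '%s: %s\n' "$knob" "$(cat "$f")"
        else
            printf '%s: (not present on this kernel)\n' "$knob"
        fi
    done
}

show_route_gc_knobs
```

Writing new values (as with the `echo ... > /proc/sys/net/ipv4/route/...` commands later in the thread) requires root.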
We'll save the patch for routing without the route hash and GC for later,
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-16 12:32 ` Robert Olsson
@ 2005-01-16 16:22 ` Jeremy M. Guthrie
2005-01-19 15:03 ` Jeremy M. Guthrie
1 sibling, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-16 16:22 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 2373 bytes --]
On Sunday 16 January 2005 06:32 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > I actually upped the buffer count to 8192 buffers instead of 10k.
> > Of the 74 samples I have thus far, 57 have been clean of errors.
> > Most of the sample errors appear to be shortly after the cache flush.
>
> I don't really believe in increasing RX buffers to this extent. We
> verified that you have CPU available and that the drops occur when the
> timer-based GC happens. Increasing buffers decreases overall performance
> and adds jitter.
I just took a look at my logs; the increase to the card's maximum of 4K RX
buffers has stopped packet drops except during GC. I agree, it isn't pretty
and it is not the solution I would like. In the meantime, it has at least
stopped the 0.3% round-the-clock packet loss I was seeing even at rates as
low as 25K pps.
> We also saw the timer-based GC taking the dst-entries from about 600k
> down to 40k in one shot. I think this is what we should look into. Not
> only is the GC itself "work"; after GC a lot of flows have to be
> recreated, doing fib lookups and creating new entries. We want to smooth
> the GC process so it happens more frequently and does less work each time.
Agreed. I went to the extreme because I can really see the % idle CPU. If I
am constantly setting up new flows then the % of free CPU shoots way down.
Minus the effects of GC, a larger flow table translates into free CPU.
> Some time ago an "in-flow" GC (as opposed to the timer-based one) was
> added to the routing code; look for "cand" in route.c. In a setup like
> yours (and ours) it would be better to rely on this process to a higher
> extent. Anyway, in /proc/sys/net/ipv4/route/ you have the files:
> gc_elasticity, gc_interval, gc_thresh, etc.; I would avoid gc_min_interval.
> You can play with your running system and watch for drops without causing
> your users too much pain.
> We'll save the patch for routing without the route hash and GC for later,
Okay. I will bring my interval down from an hour down to ten minutes and do
further tuning.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-16 12:32 ` Robert Olsson
2005-01-16 16:22 ` Jeremy M. Guthrie
@ 2005-01-19 15:03 ` Jeremy M. Guthrie
2005-01-19 22:18 ` Robert Olsson
1 sibling, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-19 15:03 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
[-- Attachment #1: Type: text/plain, Size: 1026 bytes --]
On Sunday 16 January 2005 06:32 am, Robert Olsson wrote:
> Some time ago an "in-flow" GC (as opposed to the timer-based one) was
> added to the routing code; look for "cand" in route.c. In a setup like
> yours (and ours) it would be better to rely on this process to a higher
> extent. Anyway, in /proc/sys/net/ipv4/route/ you have the files:
>
> gc_elasticity, gc_interval, gc_thresh, etc.; I would avoid gc_min_interval.
>
> You can play with your running system and watch for drops without causing
> your users too much pain.
I have done a little tweaking. I now hold at around 520K routes in the hash.
I still drop packets every secret_interval but I've upped that counter so I
don't whack all of my hash entries all that often.
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-19 15:03 ` Jeremy M. Guthrie
@ 2005-01-19 22:18 ` Robert Olsson
2005-01-20 1:50 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-19 22:18 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> > gc_elasticity, gc_interval, gc_thresh etc I would avoid gc_min_interval.
> I have done a little tweaking. I now hold at around 520K routes in the hash.
> I still drop packets every secret_interval but I've upped that counter so I
> don't whack all of my hash entries all that often.
Sounds like you've made progress.
Is this with a relatively conservative RX-buffer setting?
secret_interval needs special care, as it flushes the cache totally.
What did you tweak?
Your output from rtstat will be very interesting.
--ro
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-19 22:18 ` Robert Olsson
@ 2005-01-20 1:50 ` Jeremy M. Guthrie
2005-01-20 11:30 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-20 1:50 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
@ 14:51:44 we can see the secret_interval kick in. Otherwise we run very
solid.
101Kpps 8)
1024 buffers on the RX of eth3.
echo 86400 > /proc/sys/net/ipv4/route/secret_interval
echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
It appears to be cruising right along now.
-----------
01/19/05 14:49:57 entries: 0007fffa Pkts: 1413598 Err: 0 PPS: 94239
Drop %: 0.0% Eth3RXInt: 309669 Eth2TXInt: 36693
01/19/05 14:50:13 entries: 0007fff7 Pkts: 1448829 Err: 0 PPS: 96588
Drop %: 0.0% Eth3RXInt: 295522 Eth2TXInt: 36475
01/19/05 14:50:28 entries: 0007ffff Pkts: 1449774 Err: 0 PPS: 96651
Drop %: 0.0% Eth3RXInt: 292556 Eth2TXInt: 36780
01/19/05 14:50:43 entries: 00080000 Pkts: 1454807 Err: 0 PPS: 96987
Drop %: 0.0% Eth3RXInt: 300945 Eth2TXInt: 35358
01/19/05 14:50:58 entries: 00080000 Pkts: 1426412 Err: 0 PPS: 95094
Drop %: 0.0% Eth3RXInt: 303005 Eth2TXInt: 35118
01/19/05 14:51:14 entries: 00080002 Pkts: 1471279 Err: 0 PPS: 98085
Drop %: 0.0% Eth3RXInt: 298152 Eth2TXInt: 35573
01/19/05 14:51:29 entries: 00080002 Pkts: 1402715 Err: 0 PPS: 93514
Drop %: 0.0% Eth3RXInt: 314880 Eth2TXInt: 37688
01/19/05 14:51:44 entries: 00007ef6 Pkts: 1335367 Err: 204660 PPS: 89024
Drop %: 15.32% Eth3RXInt: 203518 Eth2TXInt: 54907
01/19/05 14:52:00 entries: 00014a3a Pkts: 1484009 Err: 0 PPS: 98933
Drop %: 0.0% Eth3RXInt: 138942 Eth2TXInt: 60283
01/19/05 14:52:15 entries: 0001f092 Pkts: 1441979 Err: 0 PPS: 96131
Drop %: 0.0% Eth3RXInt: 175142 Eth2TXInt: 53552
01/19/05 14:52:30 entries: 00028f95 Pkts: 1460829 Err: 0 PPS: 97388
Drop %: 0.0% Eth3RXInt: 181189 Eth2TXInt: 53574
01/19/05 14:52:45 entries: 000303a3 Pkts: 1423530 Err: 0 PPS: 94902
Drop %: 0.0% Eth3RXInt: 234029 Eth2TXInt: 48046
01/19/05 14:53:01 entries: 00035167 Pkts: 1443491 Err: 0 PPS: 96232
Drop %: 0.0% Eth3RXInt: 270393 Eth2TXInt: 40791
01/19/05 14:53:16 entries: 0003994a Pkts: 1455118 Err: 0 PPS: 97007
Drop %: 0.0% Eth3RXInt: 276102 Eth2TXInt: 39695
01/19/05 14:53:31 entries: 0003df5a Pkts: 1405048 Err: 0 PPS: 93669
Drop %: 0.0% Eth3RXInt: 287641 Eth2TXInt: 39903
01/19/05 14:53:46 entries: 00041f5c Pkts: 1392438 Err: 0 PPS: 92829
Drop %: 0.0% Eth3RXInt: 307840 Eth2TXInt: 40216
01/19/05 14:54:01 entries: 00045da3 Pkts: 1486506 Err: 0 PPS: 99100
Drop %: 0.0% Eth3RXInt: 280067 Eth2TXInt: 38030
01/19/05 14:54:17 entries: 00049a78 Pkts: 1504340 Err: 0 PPS: 100289
Drop %: 0.0% Eth3RXInt: 276769 Eth2TXInt: 36982
01/19/05 14:54:32 entries: 0004d72b Pkts: 1489358 Err: 0 PPS: 99290
Drop %: 0.0% Eth3RXInt: 280576 Eth2TXInt: 37378
01/19/05 14:54:47 entries: 00051214 Pkts: 1475776 Err: 0 PPS: 98385
Drop %: 0.0% Eth3RXInt: 289098 Eth2TXInt: 36632
01/19/05 14:55:02 entries: 00054ceb Pkts: 1527761 Err: 0 PPS: 101850
Drop %: 0.0% Eth3RXInt: 273478 Eth2TXInt: 35447
01/19/05 14:55:18 entries: 000584f7 Pkts: 1482312 Err: 0 PPS: 98820
Drop %: 0.0% Eth3RXInt: 288181 Eth2TXInt: 35792
01/19/05 14:55:33 entries: 0005be56 Pkts: 1477823 Err: 0 PPS: 98521
Drop %: 0.0% Eth3RXInt: 284099 Eth2TXInt: 36698
01/19/05 14:55:48 entries: 0005f55b Pkts: 1489107 Err: 0 PPS: 99273
Drop %: 0.0% Eth3RXInt: 289669 Eth2TXInt: 36162
01/19/05 14:56:03 entries: 00062ae8 Pkts: 1451832 Err: 0 PPS: 96788
Drop %: 0.0% Eth3RXInt: 299549 Eth2TXInt: 36259
01/19/05 14:56:19 entries: 000660cd Pkts: 1423292 Err: 0 PPS: 94886
Drop %: 0.0% Eth3RXInt: 306119 Eth2TXInt: 36382
01/19/05 14:56:34 entries: 000697a1 Pkts: 1458930 Err: 0 PPS: 97262
Drop %: 0.0% Eth3RXInt: 302765 Eth2TXInt: 37878
On Wednesday 19 January 2005 04:18 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > gc_elasticity, gc_interval, gc_thresh etc I would avoid
> > > gc_min_interval.
> >
> > I have done a little tweaking. I now hold at around 520K routes in the
> > hash. I still drop packets every secret_interval but I've upped that
> > counter so I don't whack all of my hash entries all that often.
>
> Sounds like you've made progress.
> Is this with a relatively conservative RX-buffer setting?
>
> secret_interval needs special care as it flushes the cache totally.
>
> What did you tweak?
> Your output from rtstat will be very interesting.
>
>
> --ro
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 1:50 ` Jeremy M. Guthrie
@ 2005-01-20 11:30 ` Robert Olsson
2005-01-20 14:37 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-20 11:30 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> @ 14:51:44 we can see the secret_interval kick in. Otherwise we run very
> solid.
>
> 101Kpps 8)
Seems traffic increased a little bit too?
> 1024 buffers on the RX of eth3.
>
> echo 86400 > /proc/sys/net/ipv4/route/secret_interval
> echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
rhash_entries?
> It appears to be cruising right along now.
Nice. rtstat output is still interesting, so we can see the number of fib
lookups, linear searches, GC dynamics, etc.
--ro
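For reference, the tuning being discussed can be collected into one shell fragment (a sketch only: the two sysctl values are the ones quoted in this thread, and rhash_entries is a kernel boot parameter, not a runtime sysctl):

```shell
# Route-cache tuning sketch (values as quoted in this thread).
# Run as root on a 2.6 kernel with the IPv4 route cache.

# Flush the whole route cache only once a day instead of every
# secret_interval (default 600 s); a full flush costs a burst of
# fib lookups and RX drops on a busy router.
echo 86400 > /proc/sys/net/ipv4/route/secret_interval

# Let the cache grow to ~512K entries before GC starts trimming.
echo 524288 > /proc/sys/net/ipv4/route/gc_thresh

# The hash table size itself can only be set at boot, on the kernel
# command line (value as mentioned later in this thread):
#   rhash_entries=2400000
```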
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 11:30 ` Robert Olsson
@ 2005-01-20 14:37 ` Jeremy M. Guthrie
2005-01-20 17:01 ` Robert Olsson
0 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-20 14:37 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
On Thursday 20 January 2005 05:30 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > @ 14:51:44 we can see the secret_interval kick in. Otherwise we run
> > very solid.
> >
> > 101Kpps 8)
>
> Seems traffic increased a little bit too?
>
> > 1024 buffers on the RX of eth3.
> >
> > echo 86400 > /proc/sys/net/ipv4/route/secret_interval
> > echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
>
> rhash_entries?
I left that at 2.4 million.
> > It appears to be cruising right along now.
>
> Nice. rtstat output is still interesting, so we can see the number of fib
> lookups, linear searches, GC dynamics, etc.
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
524291 73651 805 0 0 0 0 0 5 0
537114631 537 0 0 0 45657 3
524291 74169 779 0 0 0 0 0 2 0
1610369017 516 0 0 0 43612 0
524287 74830 777 0 0 0 0 0 6 0
0 498 0 0 0 40504 1
524285 77458 949 0 0 0 0 0 4 0
0 619 0 0 0 42731 1
524273 75475 801 0 0 0 0 0 10 0
0 549 0 0 0 40084 7
524292 76740 902 0 0 0 0 0 5 0
0 574 0 0 0 41702 3
524295 76681 829 0 0 0 0 0 4 0
0 527 0 0 0 42328 2
524333 76522 756 0 0 0 0 0 9 1
0 432 0 0 0 43621 3
524286 80323 792 0 0 0 0 0 6 0
0 498 0 0 0 44619 2
524294 79409 758 0 0 0 0 0 6 0
0 508 0 0 0 45864 3
524288 80058 756 0 0 0 0 0 7 0
0 478 0 0 0 44711 5
524291 77128 794 0 0 0 0 0 8 0
0 516 0 0 0 42301 3
524288 77433 823 0 0 0 0 0 4 0
0 539 0 0 0 42357 0
524293 79423 761 0 0 0 0 0 7 0
0 514 0 0 0 43102 3
524293 80940 804 0 0 0 0 0 6 0
0 530 0 0 0 45457 3
524287 84864 813 0 0 0 0 0 3 0
0 532 0 0 0 44309 0
524286 78358 804 0 0 0 0 0 6 0
0 531 0 0 0 43669 2
524293 72760 717 0 0 0 0 0 2 0
0 480 0 0 0 41611 0
524287 72833 684 0 0 0 0 0 5 0
0 464 0 0 0 40571 1
524290 77308 726 0 0 0 0 0 6 0
0 486 0 0 0 43031 2
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
524288 81205 746 0 0 0 0 0 5 0
537114631 512 0 0 0 42384 3
524291 81744 736 0 0 0 0 0 5 0
1610369017 494 0 0 0 43980 3
524285 85813 766 0 0 0 0 0 5 0
0 485 0 0 0 45385 3
524290 80609 810 0 0 0 0 0 5 0
0 547 0 0 0 42037 2
524281 75730 784 0 0 0 0 0 7 0
0 515 0 0 0 42966 3
524295 78139 805 0 0 0 0 0 6 0
0 540 0 0 0 43026 3
524292 77198 724 0 0 0 0 0 7 1
0 484 0 0 0 44583 1
524301 80243 802 0 0 0 0 0 7 0
0 524 0 0 0 43462 2
524286 78010 822 0 0 0 0 0 8 0
0 557 0 0 0 44684 4
524292 76739 814 0 0 0 0 0 11 1
0 539 0 0 0 45128 5
524288 79226 765 0 0 0 0 0 6 0
0 518 0 0 0 47706 1
524286 78367 757 0 0 0 0 0 8 1
0 486 0 0 0 44462 1
524292 79453 774 0 0 0 0 0 10 0
0 514 0 0 0 43612 5
524291 75541 749 0 0 0 0 0 10 0
0 492 0 0 0 43462 3
524289 77849 748 0 0 0 0 0 14 3
0 493 0 0 0 43390 7
524296 77628 773 0 0 0 0 0 8 0
0 505 0 0 0 43900 3
524295 79695 699 0 0 0 0 0 8 0
0 451 0 0 0 45529 4
524293 78484 770 0 0 0 0 0 7 0
0 339 0 0 0 44700 2
524298 79257 732 0 0 0 0 0 8 0
0 486 0 0 0 45880 4
524287 79434 749 0 0 0 0 0 12 0
0 496 0 0 0 45081 5
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 14:37 ` Jeremy M. Guthrie
@ 2005-01-20 17:01 ` Robert Olsson
2005-01-20 17:14 ` Jeremy M. Guthrie
0 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-20 17:01 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> > > 1024 buffers on the RX of eth3.
> > >
> > > echo 86400 > /proc/sys/net/ipv4/route/secret_interval
> > > echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
> >
> > rhash_entries?
> I left that at 2.4 million.
>
> > > It appears to be cruising right along now.
> size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
> GC: tot ignored goal_miss ovrf HASH: in_search out_search
> 524291 74169 779 0 0 0 0 0 2 0
> 1610369017 516 0 0 0 43612 0
> 524287 74830 777 0 0 0 0 0 6 0
> 0 498 0 0 0 40504 1
> 524285 77458 949 0 0 0 0 0 4 0
> 0 619 0 0 0 42731 1
Linear search is under control, and the number of dst entries is very high but
very constant, at the cost of calling GC a number of times per second. But I
don't understand why we do not see any GC ignored. Did you ever write to
gc_min_interval in proc? I've never seen rtstat output like this, but it seems
to do the job.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 17:01 ` Robert Olsson
@ 2005-01-20 17:14 ` Jeremy M. Guthrie
2005-01-20 21:53 ` Robert Olsson
` (2 more replies)
0 siblings, 3 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-20 17:14 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
On Thursday 20 January 2005 11:01 am, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > > 1024 buffers on the RX of eth3.
> > > >
> > > > echo 86400 > /proc/sys/net/ipv4/route/secret_interval
> > > > echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
> > >
> > > rhash_entries?
> >
> > I left that at 2.4 million.
> >
> > > > It appears to be cruising right along now.
> >
> > size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot
> > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > 524291 74169 779 0 0 0 0 0 2 0
> > 1610369017 516 0 0 0 43612 0
> > 524287 74830 777 0 0 0 0 0 6 0
> > 0 498 0 0 0 40504 1
> > 524285 77458 949 0 0 0 0 0 4 0
> > 0 619 0 0 0 42731 1
>
> Linear search is under control, and the number of dst entries is very high
> but very constant, at the cost of calling GC a number of times per second.
> But I don't understand why we do not see any GC ignored.
When does GC normally ignore?
> Did you ever write to
> gc_min_interval in proc?
I left /proc/sys/net/ipv4/route/gc_min_interval at zero.
> Never seen rtstat's like this but it seems to do the job.
More numbers from right now with higher PPS rate.
size IN: hit tot mc no_rt bcast madst masrc OUT: hit tot mc
GC: tot ignored goal_miss ovrf HASH: in_search out_search
524291 92192 848 0 0 0 0 0 17 0
537114631 553 0 0 0 50947 18
524287 95496 846 0 0 0 0 0 4 0
1610369017 539 0 0 0 52000 4
524293 98503 791 0 0 0 0 0 7 0
0 525 0 0 0 53119 3
524290 98711 965 0 0 0 0 0 3 0
0 626 0 0 0 53448 3
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 17:14 ` Jeremy M. Guthrie
@ 2005-01-20 21:53 ` Robert Olsson
2005-01-21 21:20 ` Jeremy M. Guthrie
2005-01-21 15:23 ` Robert Olsson
2005-01-31 15:37 ` Jeremy M. Guthrie
2 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-20 21:53 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> > > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > > 524287 74830 777 0 0 0 0 0 6 0
> > > 0 498 0 0 0 40504 1
> When does GC normally ignore?
At a minimum: ip_rt_gc_min_interval = HZ / 2;
From what I understand, your GC runs more often than that.
Do you understand? The test is in rt_garbage_collect().
--ro
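The rate-limit test Robert points at can be sketched as a toy model (hypothetical Python, not the kernel code; the HZ value and class/field names are illustrative only): a GC invocation that arrives again within ip_rt_gc_min_interval jiffies, while the cache is not critically oversized, is counted as "ignored" rather than doing a real pass — the rtstat column that was unexpectedly zero above.

```python
HZ = 1000                      # jiffies per second (assumed value)
GC_MIN_INTERVAL = HZ // 2      # ip_rt_gc_min_interval default

class RouteCacheGC:
    """Toy model of the rate-limit check at the top of rt_garbage_collect()."""

    def __init__(self, min_interval=GC_MIN_INTERVAL):
        self.min_interval = min_interval
        self.last_gc = None
        self.stats = {"tot": 0, "ignored": 0}

    def garbage_collect(self, now, over_max_size=False):
        self.stats["tot"] += 1
        if (self.last_gc is not None
                and now - self.last_gc < self.min_interval
                and not over_max_size):
            self.stats["ignored"] += 1
            return False          # rate-limited: no actual GC pass
        self.last_gc = now
        return True               # a real GC pass runs

gc = RouteCacheGC()
gc.garbage_collect(now=0)        # first call: real GC
gc.garbage_collect(now=100)      # within HZ/2 jiffies -> ignored
gc.garbage_collect(now=700)      # past the interval -> real GC again
print(gc.stats)                  # {'tot': 3, 'ignored': 1}
```

Under this model, a GC that fires several times per second should show a nonzero "ignored" count, which is why the all-zero column is surprising.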
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 17:14 ` Jeremy M. Guthrie
2005-01-20 21:53 ` Robert Olsson
@ 2005-01-21 15:23 ` Robert Olsson
2005-01-21 21:24 ` Jeremy M. Guthrie
2005-01-31 15:37 ` Jeremy M. Guthrie
2 siblings, 1 reply; 88+ messages in thread
From: Robert Olsson @ 2005-01-21 15:23 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
We have to understand the GC details to be 100% confident. Some time to
digest, and maybe a reboot, may help.
I also mentioned routing without the route hash. There is no spinning in the
route hash and no GC, but we trade this for doing a fib lookup for every packet.
Jamal and I started to play with this at the last OLS, but it's only been
tested in the lab so far.
For environments with a small number of flows it's probably not a good idea.
The length of flows is of course also important. No idea about the result in
your case.
I have a patch:
ftp://robur.slu.se/pub/Linux/net-development/preroute/preroute10.pat
If you feel very brave you can test this. The file to monitor is
/proc/net/softnet_stat; this verifies the new packet path.
With rtstat, (tot) should show all routed packets and the pps rate
we are able to achieve.
--ro
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 21:53 ` Robert Olsson
@ 2005-01-21 21:20 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-21 21:20 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
On Thursday 20 January 2005 03:53 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > > > mc GC: tot ignored goal_miss ovrf HASH: in_search out_search
> > > > 524287 74830 777 0 0 0 0 0 6
> > > > 0 0 498 0 0 0 40504 1
> >
> > When does GC normally ignore?
>
> At a min ip_rt_gc_min_interval = HZ / 2;
>
> From what I understand, your GC runs more often than that.
> Do you understand? The test is in rt_garbage_collect().
I'll look in the function and let you know if I have any questions.
Thanks
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-21 15:23 ` Robert Olsson
@ 2005-01-21 21:24 ` Jeremy M. Guthrie
0 siblings, 0 replies; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-21 21:24 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
On Friday 21 January 2005 09:23 am, Robert Olsson wrote:
> We have to understand the GC details to be 100% confident. Some time to
> digest, and maybe a reboot, may help.
> I also mentioned routing without the route hash. No spinning in route
> hash and no GC but we trade this for doing a fib lookup for every packet.
> Jamal and I started to play with this at the last OLS, but it's only been
> tested in the lab so far.
I'd like to hold off on that right now. Things are running pretty well at the
moment. I'm trying to get my hands on a 'test' system to run side-by-side with
the production system. We go live Feb 1 with our app, so I need to start
cutting down on downtime.
> For environments with a small number of flows it's probably not a good idea.
> The length of flows is of course also important. No idea about the result in
> your case.
>
> I have a patch:
>
> ftp://robur.slu.se/pub/Linux/net-development/preroute/preroute10.pat
>
> If you feel very brave you can test this. The file to monitor is
> /proc/net/softnet_stat; this verifies the new packet path.
>
> With rtstat, (tot) should show all routed packets and the pps rate
> we are able to achieve.
I actually think I could have our other test system ready early next week.
I'll let you know.
Have a good weekend.
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-20 17:14 ` Jeremy M. Guthrie
2005-01-20 21:53 ` Robert Olsson
2005-01-21 15:23 ` Robert Olsson
@ 2005-01-31 15:37 ` Jeremy M. Guthrie
2005-01-31 18:06 ` Robert Olsson
2 siblings, 1 reply; 88+ messages in thread
From: Jeremy M. Guthrie @ 2005-01-31 15:37 UTC (permalink / raw)
To: netdev; +Cc: Robert Olsson
Just wanted to verify that by the term linear search from rtstat/lnstat, you
mean a search of the buckets for a particular hash?
We're going live tomorrow with the app. Afterwards I'll load up the no route
caching patch and test that.
* Re: V2.4 policy router operates faster/better than V2.6
2005-01-31 15:37 ` Jeremy M. Guthrie
@ 2005-01-31 18:06 ` Robert Olsson
0 siblings, 0 replies; 88+ messages in thread
From: Robert Olsson @ 2005-01-31 18:06 UTC (permalink / raw)
To: jeremy.guthrie; +Cc: netdev, Robert Olsson
Jeremy M. Guthrie writes:
> Just wanted to verify that by the term linear search from rtstat/lnstat, you
> mean a search of the buckets for a particular hash?
Yes.
> We're going live tomorrow with the app. Afterwards I'll load up the no route
> caching patch and test that.
Ok! Look at the patch just posted to davem. We should be able to view and
control gc_min_interval better.
net.ipv4.route.gc_min_interval_ms = 300
net.ipv4.route.gc_min_interval = 0
--ro
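The "linear search" Robert confirms above can be illustrated with a small sketch (hypothetical Python, not kernel code; the class and counter names are illustrative): each lookup hashes the flow key to a bucket, then walks that bucket's chain entry by entry, and rtstat's in_search/out_search columns count entries walked past without matching.

```python
# Hypothetical sketch of a hashed route cache and its "linear search"
# counter: lookups hash the flow key to a bucket, then walk that
# bucket's chain linearly, counting non-matching entries examined.

class RouteHash:
    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]
        self.search_steps = 0          # analogue of in_search/out_search

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, route):
        self._bucket(key).append((key, route))

    def lookup(self, key):
        for k, route in self._bucket(key):
            if k == key:
                return route           # cache hit
            self.search_steps += 1     # walked past a non-matching entry
        return None                    # miss -> full fib lookup needed

cache = RouteHash(nbuckets=1)          # a single bucket forces long chains
cache.insert(("10.0.0.1", "10.1.0.1"), "via eth2")
cache.insert(("10.0.0.2", "10.1.0.2"), "via eth2")
cache.insert(("10.0.0.3", "10.1.0.3"), "via eth2")
cache.lookup(("10.0.0.3", "10.1.0.3"))
print(cache.search_steps)              # walked past 2 entries
```

This is why a hash table sized well above the entry count (rhash_entries much larger than gc_thresh, as in this thread) keeps the chains, and so the search counters, under control.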
end of thread, other threads:[~2005-01-31 18:06 UTC | newest]
Thread overview: 88+ messages
2005-01-03 20:55 V2.4 policy router operates faster/better than V2.6 Jeremy M. Guthrie
2005-01-03 22:51 ` Stephen Hemminger
2005-01-03 22:56 ` Jeremy M. Guthrie
2005-01-05 13:18 ` Robert Olsson
2005-01-05 15:18 ` Jeremy M. Guthrie
2005-01-05 16:30 ` Robert Olsson
2005-01-05 17:35 ` Jeremy M. Guthrie
2005-01-05 19:25 ` Jeremy M. Guthrie
2005-01-05 20:22 ` Robert Olsson
2005-01-05 20:52 ` Jeremy M. Guthrie
2005-01-06 15:26 ` Jeremy M. Guthrie
2005-01-06 18:15 ` Robert Olsson
2005-01-06 19:35 ` Jeremy M. Guthrie
2005-01-06 20:29 ` Robert Olsson
2005-01-06 20:54 ` Jeremy M. Guthrie
2005-01-06 20:55 ` Jeremy M. Guthrie
2005-01-06 21:19 ` Jeremy M. Guthrie
2005-01-06 21:36 ` Robert Olsson
2005-01-06 21:46 ` Jeremy M. Guthrie
2005-01-06 22:11 ` Robert Olsson
2005-01-06 22:18 ` Jeremy M. Guthrie
2005-01-06 22:35 ` Robert Olsson
2005-01-07 16:17 ` Jeremy M. Guthrie
2005-01-07 19:18 ` Robert Olsson
2005-01-07 19:38 ` Jeremy M. Guthrie
2005-01-07 20:07 ` Robert Olsson
2005-01-07 20:14 ` Jeremy M. Guthrie
2005-01-07 20:40 ` Robert Olsson
2005-01-07 21:06 ` Jeremy M. Guthrie
2005-01-07 21:30 ` Robert Olsson
2005-01-11 15:11 ` Jeremy M. Guthrie
2005-01-07 22:28 ` Jesse Brandeburg
2005-01-07 22:50 ` Jeremy M. Guthrie
2005-01-07 22:57 ` Stephen Hemminger
2005-01-11 15:17 ` Jeremy M. Guthrie
2005-01-11 16:40 ` Robert Olsson
2005-01-12 1:27 ` Jeremy M. Guthrie
2005-01-12 15:11 ` Robert Olsson
2005-01-12 16:24 ` Jeremy M. Guthrie
2005-01-12 19:27 ` Robert Olsson
2005-01-12 20:11 ` Jeremy M. Guthrie
2005-01-12 20:21 ` Robert Olsson
2005-01-12 20:30 ` Jeremy M. Guthrie
2005-01-12 20:45 ` Jeremy M. Guthrie
2005-01-12 22:02 ` Robert Olsson
2005-01-12 22:21 ` Jeremy M. Guthrie
[not found] ` <16869.42247.126428.508479@robur.slu.se>
2005-01-12 22:42 ` Jeremy M. Guthrie
2005-01-12 22:47 ` Jeremy M. Guthrie
2005-01-12 23:19 ` Robert Olsson
2005-01-12 23:23 ` Jeremy M. Guthrie
2005-01-13 8:56 ` Robert Olsson
2005-01-13 19:28 ` Jeremy M. Guthrie
2005-01-13 20:00 ` David S. Miller
2005-01-13 20:43 ` Jeremy M. Guthrie
2005-01-13 23:13 ` David S. Miller
2005-01-13 21:12 ` Robert Olsson
2005-01-13 22:27 ` Jeremy M. Guthrie
2005-01-14 15:44 ` Robert Olsson
2005-01-14 14:59 ` Jeremy M. Guthrie
2005-01-14 16:05 ` Robert Olsson
2005-01-14 19:00 ` Jeremy M. Guthrie
2005-01-14 19:26 ` Jeremy M. Guthrie
2005-01-16 12:32 ` Robert Olsson
2005-01-16 16:22 ` Jeremy M. Guthrie
2005-01-19 15:03 ` Jeremy M. Guthrie
2005-01-19 22:18 ` Robert Olsson
2005-01-20 1:50 ` Jeremy M. Guthrie
2005-01-20 11:30 ` Robert Olsson
2005-01-20 14:37 ` Jeremy M. Guthrie
2005-01-20 17:01 ` Robert Olsson
2005-01-20 17:14 ` Jeremy M. Guthrie
2005-01-20 21:53 ` Robert Olsson
2005-01-21 21:20 ` Jeremy M. Guthrie
2005-01-21 15:23 ` Robert Olsson
2005-01-21 21:24 ` Jeremy M. Guthrie
2005-01-31 15:37 ` Jeremy M. Guthrie
2005-01-31 18:06 ` Robert Olsson
2005-01-12 22:05 ` Jeremy M. Guthrie
2005-01-12 22:22 ` Robert Olsson
2005-01-12 22:30 ` Jeremy M. Guthrie
2005-01-11 17:17 ` Jeremy M. Guthrie
2005-01-11 18:46 ` Robert Olsson
2005-01-12 1:30 ` Jeremy M. Guthrie
2005-01-12 16:02 ` Robert Olsson
2005-01-04 15:07 ` Jeremy M. Guthrie
[not found] <200501071619.54566.jeremy.guthrie@berbee.com>
2005-01-07 23:23 ` Jesse Brandeburg
2005-01-10 21:11 ` Jeremy M. Guthrie
[not found] <C925F8B43D79CC49ACD0601FB68FF50C02D39006@orsmsx408>
2005-01-13 22:55 ` Jeremy M. Guthrie