From: "Jeremy M. Guthrie" <jeremy.guthrie@berbee.com>
To: netdev@oss.sgi.com
Cc: Robert Olsson <Robert.Olsson@data.slu.se>
Subject: Re: V2.4 policy router operates faster/better than V2.6
Date: Thu, 13 Jan 2005 16:27:17 -0600
Message-ID: <200501131627.20360.jeremy.guthrie@berbee.com>
In-Reply-To: <16870.58414.767012.96364@robur.slu.se>
On Thursday 13 January 2005 03:12 pm, Robert Olsson wrote:
> Jeremy M. Guthrie writes:
> > After a few revs I just bumped rhash_entries to 2.4 million in an
> > attempt to get well above my actual usage.
>
> A bit hefty size :-) But the stats are looking much better, as we do much
> less linear searching (in_search) in the hash and fewer fib lookups (tot).
Okay.
> And do you now have "dst cache overflows"?
No, I haven't gotten any of these yet.
> Is the e1000 patch I sent in use?
Yes. I also have another e1000 driver I haven't had a chance to try yet; it
is a bit more instrumented.
> > You can see below I am over 600K entries before it blows them away and
> > restarts.
>
> This is part of the GC process to reclaim memory and unused dst
> entries.
>
> > size    IN: hit  tot   mc  no_rt bcast madst masrc  OUT: hit tot mc  GC: tot ignored goal_miss ovrf  HASH: in_search out_search
> > 615852    86368   626   0      0     0     0     0        8   0  0       624     622         0    0          56504         10
> > 493558    47553  4603   0      0     0     0     0        2   0  0      4166    4164         0    0          28346          0
> >  10091    46526  7096   0      0     0     0     0        2   3  0         0       0         0    0            554          0
> >  16238    80565  6145   0      0     0     0     0        6   3  0         0       0         0    0           1334          0
>
> In short, we reduce the hash size to remove unused flows and let it grow
> again. You can see from (tot) that we have to recreate many of the flows
> at this point. Most likely this is where we drop the packets. We have
> monitored small drops in our own system when GC happens. The GC can be
> smoothed out, but we leave that for now.
Sorry, not quite following.
IN: hit counts are cache hits, yes? And tot is the total number of flows
created since we last looked at the flow count, correct? What would cause a
packet to drop in the network stack and thus show up in
/proc/net/softnet_stat?
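(For anyone else following along, the per-CPU hex rows from
/proc/net/softnet_stat can be decoded with a short script. This is a sketch
on sample data from the dumps later in this mail; the column meanings --
processed, dropped, time_squeeze -- are my assumption from the 2.6-era net
core, not something stated in this thread:)

```python
# Hedged sketch: decode softnet_stat-style rows (one hex row per CPU).
# Column layout is an ASSUMPTION from 2.6-era kernels:
#   col 0 = packets processed, col 1 = dropped (backlog queue full),
#   col 2 = time_squeeze. Remaining columns are left undecoded here.

SAMPLE = """\
1b5e031d 00000000 0000a829 00000000 00000000 00000000 00000000 00000000 0002cbd7
000072c1 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00001e00
"""

def parse_softnet(text):
    """Return a list of per-CPU dicts with the first three counters decoded."""
    rows = []
    for line in text.splitlines():
        cols = [int(field, 16) for field in line.split()]
        rows.append({"processed": cols[0],
                     "dropped": cols[1],
                     "time_squeeze": cols[2]})
    return rows

for cpu, row in enumerate(parse_softnet(SAMPLE)):
    print(cpu, row["processed"], row["dropped"], row["time_squeeze"])
```

On a live box the same function can be fed `open("/proc/net/softnet_stat").read()`.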
> > How do I bump up the time from 10 minutes to something longer?
>
> Davem pointed out another periodic task that flushes the cache totally.
> It's
>
> /proc/sys/net/ipv4/route/secret_interval
>
> It flushes the cache completely, so all current flows have to be
> recreated. You probably drop packets here in your setup. Yes, it can be a
> good idea to increase it or to run the flush manually. But most routers
> drop packets now and then.
If I set secret_interval to 60 seconds, I drop over 1% of all packets
coming through, so GC isn't exactly my friend.
Performance has picked up: I am not dropping packets anymore except during
GC. I upped my interval from 600 seconds to 1800 seconds.
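(In case it saves someone a man-page hunt: the bump is just a write to the
procfs path Robert gave. A minimal sketch -- the helper name is mine, and
it assumes a 2.6 ipv4 router where that sysctl exists:)

```python
# Hedged sketch: raise the rt-cache flush interval via procfs.
# The path comes from Robert's mail; 1800 is the value I settled on above.
# Writing it requires root, so failures are reported rather than raised.
SECRET_INTERVAL = "/proc/sys/net/ipv4/route/secret_interval"

def set_secret_interval(seconds, path=SECRET_INTERVAL):
    """Write the interval (in seconds); return True on success."""
    try:
        with open(path, "w") as f:
            f.write(str(seconds))
        return True
    except OSError:
        return False  # not root, or no such sysctl on this kernel
```

Equivalent to `echo 1800 > /proc/sys/net/ipv4/route/secret_interval` as
root; `cat` on the same path shows the current value.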
Here are 15-second snapshots. Line 3 appears to be when GC takes effect;
afterwards, everything stabilizes. These numbers are much better.
Thu Jan 13 16:10:30 CST 2005 entries: 000de44a Packets: 1255162 Errors: 0 PPS: 83677 Percentage: 0.0%
Thu Jan 13 16:10:45 CST 2005 entries: 000df2ad Packets: 1303050 Errors: 3875 PPS: 86870 Percentage: 0.29%
Thu Jan 13 16:11:00 CST 2005 entries: 0000b053 Packets: 1265398 Errors: 38586 PPS: 84359 Percentage: 3.04%
Thu Jan 13 16:11:15 CST 2005 entries: 00013df8 Packets: 1310618 Errors: 0 PPS: 87374 Percentage: 0.0%
Thu Jan 13 16:11:30 CST 2005 entries: 0001b527 Packets: 1282435 Errors: 0 PPS: 85495 Percentage: 0.0%
Thu Jan 13 16:11:45 CST 2005 entries: 000222bb Packets: 1213217 Errors: 0 PPS: 80881 Percentage: 0.0%
Thu Jan 13 16:12:01 CST 2005 entries: 00027c7e Packets: 1279811 Errors: 0 PPS: 85320 Percentage: 0.0%
Thu Jan 13 16:12:16 CST 2005 entries: 0002c5d5 Packets: 1224232 Errors: 0 PPS: 81615 Percentage: 0.0%
Thu Jan 13 16:12:31 CST 2005 entries: 0003090c Packets: 1243539 Errors: 0 PPS: 82902 Percentage: 0.0%
Thu Jan 13 16:12:46 CST 2005 entries: 00034d41 Packets: 1267200 Errors: 0 PPS: 84480 Percentage: 0.0%
Thu Jan 13 16:13:01 CST 2005 entries: 00038f82 Packets: 1238821 Errors: 0 PPS: 82588 Percentage: 0.0%
Thu Jan 13 16:13:16 CST 2005 entries: 0003cf6a Packets: 1245474 Errors: 0 PPS: 83031 Percentage: 0.0%
Thu Jan 13 16:13:31 CST 2005 entries: 00040d23 Packets: 1266478 Errors: 0 PPS: 84431 Percentage: 0.0%
Thu Jan 13 16:13:46 CST 2005 entries: 00044918 Packets: 1247576 Errors: 0 PPS: 83171 Percentage: 0.0%
Thu Jan 13 16:14:01 CST 2005 entries: 00048520 Packets: 1223002 Errors: 0 PPS: 81533 Percentage: 0.0%
Thu Jan 13 16:14:16 CST 2005 entries: 0004c0b6 Packets: 1303942 Errors: 333 PPS: 86929 Percentage: 0.2%
Thu Jan 13 16:14:32 CST 2005 entries: 0004f83e Packets: 1203334 Errors: 0 PPS: 80222 Percentage: 0.0%
Thu Jan 13 16:14:47 CST 2005 entries: 00053241 Packets: 1216611 Errors: 0 PPS: 81107 Percentage: 0.0%
Thu Jan 13 16:15:02 CST 2005 entries: 00056f97 Packets: 1281206 Errors: 0 PPS: 85413 Percentage: 0.0%
Thu Jan 13 16:15:17 CST 2005 entries: 0005b020 Packets: 1270007 Errors: 0 PPS: 84667 Percentage: 0.0%
Thu Jan 13 16:15:32 CST 2005 entries: 0005eb63 Packets: 1250099 Errors: 0 PPS: 83339 Percentage: 0.0%
Thu Jan 13 16:15:47 CST 2005 entries: 00061e08 Packets: 1183444 Errors: 0 PPS: 78896 Percentage: 0.0%
Thu Jan 13 16:16:02 CST 2005 entries: 0006489b Packets: 1246170 Errors: 3791 PPS: 83078 Percentage: 0.30%
Thu Jan 13 16:16:17 CST 2005 entries: 00066f1f Packets: 1233601 Errors: 4141 PPS: 82240 Percentage: 0.33%
Thu Jan 13 16:16:32 CST 2005 entries: 000695aa Packets: 1273744 Errors: 3798 PPS: 84916 Percentage: 0.29%
Thu Jan 13 16:16:47 CST 2005 entries: 0006ba5d Packets: 1263619 Errors: 4219 PPS: 84241 Percentage: 0.33%
Thu Jan 13 16:17:03 CST 2005 entries: 0006df19 Packets: 1240743 Errors: 3616 PPS: 82716 Percentage: 0.29%
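(The Percentage column is just interval errors over interval packets; my
snapshot script does essentially this. A sketch on one line from the log
above -- the field regexes assume exactly the format printed there:)

```python
# Hedged sketch: recompute the drop percentage from one snapshot line.
# Format assumed from the log above; "entries" is hex, the rest decimal.
import re

LINE = ("Thu Jan 13 16:11:00 CST 2005 entries: 0000b053 "
        "Packets: 1265398 Errors: 38586 PPS: 84359 Percentage: 3.04%")

def drop_pct(line):
    """Errors as a percentage of packets seen in the interval."""
    packets = int(re.search(r"Packets: (\d+)", line).group(1))
    errors = int(re.search(r"Errors: (\d+)", line).group(1))
    return 100.0 * errors / packets

print("%.2f%%" % drop_pct(LINE))  # ~3.05% rounded; the log truncates to 3.04%
```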
----------one other snapshot------------
Thu Jan 13 16:09:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:459165122 errors:3427143 dropped:3427143 overruns:2045357 frame:0
1b5e031d 00000000 0000a829 00000000 00000000 00000000 00000000 00000000 0002cbd7
000072c1 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00001e00
entries in_hit in_slow_tot in_slow_mc in_no_route in_brd in_martian_dst in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
000d92e0 1a0ecfdc 014e19f9 00000000 00000000 000000a6 000000df 00000000 00009558 00000c5e 00000000 000b7605 000b6c68 00000000 00000000 07c9547f 0000398d
000d92e0 00001340 00005e40 00000000 00000000 0000005e 00000000 00000000 00000007 00000036 00000002 00000002 00000002 00000000 00000000 00001542 00000004
CPU0 CPU1
18: 123586344 8007 IO-APIC-level eth3
20: 1 18109191 IO-APIC-level eth2
Thu Jan 13 16:10:03 CST 2005
eth3 Link encap:Ethernet HWaddr 00:02:B3:D5:7E:30
RX packets:464242944 errors:3427143 dropped:3427143 overruns:2045357 frame:0
1bab839b 00000000 0000a82d 00000000 00000000 00000000 00000000 00000000 0002d2bc
000072e3 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00001ed8
entries in_hit in_slow_tot in_slow_mc in_no_route in_brd in_martian_dst in_martian_src out_hit out_slow_tot out_slow_mc gc_total gc_ignored gc_goal_miss gc_dst_overflow in_hlist_search out_hlist_search
000dcaba 1a5bd4fd 014e9288 00000000 00000000 000000a6 000000df 00000000 00009678 00000c6a 00000000 000bee9e 000be489 00000000 00000000 08109f0f 00003a97
000dcaba 00001349 00005e58 00000000 00000000 0000005e 00000000 00000000 00000007 00000036 00000002 00000002 00000002 00000000 00000000 00001597 00000004
CPU0 CPU1
18: 125388992 8007 IO-APIC-level eth3
20: 1 18340497 IO-APIC-level eth2
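(For trending these dumps: the rt_cache columns are all hex, so here's the
decoder I use, as a sketch. The field names follow the /proc/net/stat/rt_cache
header as printed by 2.6 kernels; the sample row is the 16:09 snapshot above:)

```python
# Hedged sketch: decode one /proc/net/stat/rt_cache row (hex columns) and
# compute the input-path cache hit ratio. Field order is assumed from the
# 2.6 rt_cache header; the row is copied from the snapshot above.
FIELDS = ("entries in_hit in_slow_tot in_slow_mc in_no_route in_brd "
          "in_martian_dst in_martian_src out_hit out_slow_tot out_slow_mc "
          "gc_total gc_ignored gc_goal_miss gc_dst_overflow "
          "in_hlist_search out_hlist_search").split()

ROW = ("000d92e0 1a0ecfdc 014e19f9 00000000 00000000 000000a6 000000df "
       "00000000 00009558 00000c5e 00000000 000b7605 000b6c68 00000000 "
       "00000000 07c9547f 0000398d")

def decode(row):
    """Map the hex columns of one rt_cache row onto their field names."""
    return dict(zip(FIELDS, (int(v, 16) for v in row.split())))

stats = decode(ROW)
hit_ratio = stats["in_hit"] / float(stats["in_hit"] + stats["in_slow_tot"])
print("entries=%d hit_ratio=%.3f" % (stats["entries"], hit_ratio))
```

Diffing two decoded rows (e.g. the 16:09 and 16:10 snapshots) gives per-minute
rates for gc_total, in_hlist_search, and friends.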
--
--------------------------------------------------
Jeremy M. Guthrie jeremy.guthrie@berbee.com
Senior Network Engineer Phone: 608-298-1061
Berbee Fax: 608-288-3007
5520 Research Park Drive NOC: 608-298-1102
Madison, WI 53711